Cybercriminals Using ChatGPT Popularity to Spread Malware via Facebook Accounts, CloudSEK Says


Cybercriminals are exploiting the popularity of ChatGPT to spread malware through hijacked Facebook accounts, cyber intelligence firm CloudSEK said on Monday.

In its investigation, CloudSEK found 13 compromised Facebook pages and accounts, including some with Indian content, totalling over 5 lakh (500,000) followers, that are being used to disseminate the malware via Facebook ads.

“Cybercriminals are capitalising on the popularity of ChatGPT, exploiting Facebook’s vast user base by compromising legitimate Facebook accounts to distribute malware via Facebook ads, putting users’ security at risk. Our investigation has uncovered 13 compromised pages with over 500k followers, some of which have been hijacked since February 2023. We urge users to be vigilant and aware of such malicious activities on the platform,” CloudSEK cyber intelligence analyst Bablu Kumar said.

CloudSEK also claims to have uncovered at least 25 malicious websites impersonating the OpenAI website, duping individuals into downloading and installing harmful software and posing a severe risk to their security and privacy.

“The malicious malware is not only capable of stealing sensitive information such as PII, system information, and credit card details from the user’s device, but also has replication capabilities to spread across systems through removable media. With the ability to escalate privileges and persistently remain on the system, it poses a significant threat,” Kumar said.
