A cybercriminal group tracked as UNC6032 has been exploiting the surge of interest in artificial intelligence by promoting malicious ads on social media platforms, putting users' sensitive information, including credentials and credit card details, at risk. According to a report by Mandiant, Google's cybersecurity subsidiary, thousands of fraudulent advertisements have been detected on Facebook, with a smaller number appearing on LinkedIn, since November 2024. The ads lead to more than 30 deceptive websites that impersonate legitimate AI video generation tools, such as Luma AI and Canva Dream Lab, and falsely claim to convert text and images into videos.

Once users unwittingly click a “Start Free Now” button on one of the spoofed websites, they are presented with a convincing fake video-generation interface. After they interact with it and a phony loading sequence plays, they are prompted to download a ZIP file containing malware. Once executed, the malware compromises the victim’s device by establishing a backdoor, logging keystrokes, and scanning for password managers and digital wallets. The scale of UNC6032’s operations is staggering: Mandiant notes that the malicious ads have reached more than two million users on Facebook and LinkedIn, although the firm clarifies that this reach does not equate to the number of successful infections.

Meta, Facebook’s parent company, has responded to this evolving threat by removing the malicious ads, blocking the associated URLs, and shutting down the rogue accounts responsible for disseminating these fraudulent campaigns. A Meta spokesperson acknowledged the ongoing challenge, stating that “Cyber criminals constantly evolve their tactics to evade detection.” Despite its efforts to mitigate the risks posed by these campaigns, the company admits it is difficult to determine the exact number of victims.

In line with Mandiant’s findings, reports from other cybersecurity analysts reveal similar patterns of deception used by different actors, including the promotion of fake AI services impersonating OpenAI’s ChatGPT and DALL-E through suspect Facebook pages and advertisements. These schemes trick unsuspecting users into downloading malware designed to exfiltrate sensitive data, which is often sold on dark web marketplaces or used to conduct further online fraud.

These malicious campaigns are not merely isolated incidents; they form part of a broader, increasingly sophisticated cybercrime ecosystem. Reports indicate that as of 2023, Meta had disrupted nearly ten new malware strains and blocked over 1,000 dangerous URLs linked to fraudulent activities. This proactive stance underscores the ongoing battle between social media platforms and cybercriminals, who exploit both popular trends and the established trust in AI technologies.

Further scrutiny reveals that UNC6032 actively rotates the domains featured in its ads to evade detection, creating new advertisements daily to stay ahead of takedown efforts. This relentless cycle of adaptation is compounded by the use of advanced malware families capable of keylogging and extracting sensitive data, which raises the stakes for potential victims.

Mandiant has also noted how easily these deceptive AI websites draw in users, pointing to an infrastructure of fake promotions and hijacked accounts that continuously lures in people seeking legitimate tools. This underscores a critical gap in online safety and the urgent need for heightened vigilance among users, particularly as the allure of AI technologies continues to grow.

In summary, the threat posed by UNC6032 and similar groups illustrates the darker side of growing reliance on digital tools, especially within the AI landscape. As organisations like Meta strive to shore up their defences, individuals must remain acutely aware of the risks behind seemingly innocuous online interactions in order to protect their personal and financial information.

Source: Noah Wire Services