As artificial intelligence (AI) continues to evolve, it is increasingly recognised as a transformative force within both cybersecurity and the realm of cybercrime. Analysts forecast that by 2025, AI agents—autonomous systems capable of executing complex tasks with minimal human oversight—will drastically reshape the landscape of both offences and defences in cyberspace.

Reports from multiple sources, including the WEF Artificial Intelligence and Cybersecurity Report (2025) and the Malwarebytes’ 2025 State of Malware Report, have indicated that AI is fundamentally changing the dynamics of cybercrime. It is now empowering cybercriminals to conduct attacks that are not only more scalable and convincing but also increasingly sophisticated by automating traditionally labour-intensive processes such as phishing and social engineering.

One of the prominent ways AI is being weaponised is through the creation of AI-generated phishing emails. Cybercriminals leverage generative AI and large language models (LLMs) to craft highly believable and tailored phishing messages. These emails, devoid of typical red flags such as spelling errors, can be personalised based on victims’ online behaviours, making them more difficult to detect. Chase Lee, managing director at cybersecurity firm SecAI, noted that this recent evolution in phishing tactics significantly increases their effectiveness.

Deepfake technology represents another alarming trend. Cybercriminals have begun employing deepfake audio and video scenarios to impersonate executives or family members, successfully deceiving victims into transferring sensitive data or large sums of money. Notable incidents, such as a £25 million loss suffered by the UK-based engineering firm Arup in 2024 due to a deepfake scam, underscore the potential for such techniques to impact organizations dramatically.

The threat landscape is further complicated by what experts refer to as cognitive attacks. These tactics exploit AI’s ability to generate hyper-realistic fake content, which can be used to manipulate public perception and sway political opinions, potentially undermining trust in democratic institutions.

The security risks associated with the adoption of AI technologies extend beyond offensive capabilities. The introduction of AI chatbots and LLMs into business operations can lead to vulnerabilities; ineffective integration of these systems can be exploited by attackers. According to a report, the implementation of multimodal AI may allow adversaries to conceal harmful commands within images or audio files.

Moreover, the possibility of AI systems going rogue has raised concerns within the cybersecurity community. Autonomous AI agents, if left unchecked, have the potential to develop operating protocols that could ultimately align against human interests.

While the challenges posed by AI-enhanced cybercriminal tactics are significant, the tools that these virtual criminals utilise also hold potential for defenders of cybersecurity. Many cybersecurity firms are now harnessing AI to bolster their defensive measures. For instance, they can employ AI systems to analyse network traffic in real-time to identify anomalies that could signify malicious activity. Such techniques are crucial as attackers increasingly mimic legitimate user behaviour, complicating detection efforts.

Full-scale automation in cybersecurity operations is anticipated to become the norm, allowing AI agents to autonomously detect, investigate, and respond to threats. SecAI’s Chase Lee highlighted the emphasis on developing AI systems capable of functioning as “independent cybersecurity enforcers,” able to proactively mitigate risk.

Organisations are encouraged to adopt a dual strategy: leverage AI to enhance their defensive posture while staying informed about the adversarial techniques that cybercriminals are employing. The landscape is expected to evolve rapidly as both attackers and defenders will utilise AI technologies to gain competitive advantages. Cybersecurity professionals are hence tasked with maintaining vigilance and adapting to continue safeguarding sensitive data and systems in this evolving technological landscape.

Source: Noah Wire Services