Recent findings from the 2025 State of Data Security Report by Varonis paint a stark picture of cloud security challenges exacerbated by the widespread adoption of artificial intelligence (AI) tools. The report, which analysed 1,000 IT environments, highlights alarming gaps in governance that leave sensitive data perilously exposed. Among its most striking revelations, an overwhelming 99% of organisations reported that sensitive information was accessible to various AI applications, underscoring a critical risk in the current digital landscape.

These vulnerabilities extend to the use of unverified and unsanctioned applications—dubbed ‘shadow AI’—with 98% of enterprises admitting to operating these tools, which bypass standard security protocols. The extensive reliance on these applications raises concerns about potential data leaks and unauthorised access. Further compounding the problem, a staggering 88% of organisations still have dormant user accounts, often referred to as ghost users, which remain active and can be exploited by malicious actors to pivot and infiltrate deeper into corporate systems.
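The dormant-account risk described above is straightforward to audit in principle. The following is a minimal, hypothetical sketch (the record fields, the 90-day threshold, and the `find_ghost_users` helper are all illustrative assumptions, not anything specified in the Varonis report) of flagging enabled accounts that have not logged in recently:

```python
from datetime import datetime, timedelta

# Illustrative threshold for "dormant"; real policies vary by organisation.
STALE_AFTER_DAYS = 90

def find_ghost_users(accounts, now):
    """Return names of accounts that are still enabled but have not
    logged in within STALE_AFTER_DAYS of `now`."""
    cutoff = now - timedelta(days=STALE_AFTER_DAYS)
    return [
        a["user"] for a in accounts
        if a["enabled"] and datetime.fromisoformat(a["last_login"]) < cutoff
    ]

# Hypothetical directory export (field names are assumptions).
accounts = [
    {"user": "alice", "enabled": True,  "last_login": "2025-06-01T09:00:00"},
    {"user": "bob",   "enabled": True,  "last_login": "2024-01-15T09:00:00"},
    {"user": "carol", "enabled": False, "last_login": "2023-05-01T09:00:00"},
]
print(find_ghost_users(accounts, now=datetime(2025, 6, 30)))  # ['bob']
```

A real audit would pull this data from the identity provider rather than a static list, but the principle is the same: enabled-but-unused accounts are candidates for deactivation before an attacker can pivot through them.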

The report emphasises that poor identity management practices, combined with insufficient enforcement of security measures—such as multifactor authentication (MFA)—create a precarious environment where breaches can occur with alarming ease. Notably, the absence of MFA enforcement, recorded in 14% of organisations, has already been linked to significant security breaches in recent years. Many companies also struggle with inadequate data governance: only 10% of organisations employ proper file labelling, which is crucial for effective access control and compliance.
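The two governance gaps just cited, unenforced MFA and unlabelled files, both reduce to coverage metrics an organisation can measure. Below is a hedged sketch (the field names `mfa_enforced` and `label`, and the helper functions, are assumptions for illustration, not an API from the report or any specific product):

```python
def mfa_gap(users):
    """Fraction of users for whom MFA is not enforced."""
    return sum(1 for u in users if not u["mfa_enforced"]) / len(users)

def label_coverage(files):
    """Fraction of files carrying any sensitivity label."""
    return sum(1 for f in files if f.get("label")) / len(files)

# Hypothetical exports from an identity provider and a file inventory.
users = [{"name": "alice", "mfa_enforced": True},
         {"name": "bob",   "mfa_enforced": False}]
files = [{"path": "q3-forecast.xlsx", "label": "Confidential"},
         {"path": "notes.txt",        "label": None}]

print(mfa_gap(users))         # 0.5
print(label_coverage(files))  # 0.5
```

Tracking these ratios over time gives a simple signal of whether identity and data-governance posture is improving, independent of any particular tooling.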

The pervasive nature of AI within corporate structures further complicates security protocols. As Varonis points out, the very benefits that AI tools offer—enhanced productivity and data insights—can also serve as a double-edged sword. “AI acts like a hungry Pac-Man, scanning and analysing all the data it can grab. If AI surfaces critical data where it doesn’t belong, it’s game over,” the company articulated in a related blog post. Addressing this requires a robust approach to management that not only embraces AI’s opportunities but also defines stringent frameworks around data protection.

The growing menace of shadow AI underscores the importance of identifying and safeguarding the 'crown jewels' of an organisation's data. As experts like Brian Vecci of Varonis note, comprehensive data management strategies are essential to prevent breaches stemming from unauthorised applications. This theme resonates beyond corporate entities, echoing among federal agencies grappling with similar security dilemmas. A data-centric security posture is increasingly deemed necessary to protect sensitive information amidst the complexities introduced by AI and cloud environments.

The ongoing surge in data breaches linked to generative AI tools underscores a pressing need for organisations to fortify their security measures. The report highlights one case in which a former employee used a generative AI copilot to access and exfiltrate sensitive customer data, reiterating the tangible risks such tools can pose without rigorous oversight and security frameworks.

Consequently, the report urges organisations to confront their exposure levels, adopt stringent access controls, and treat data security as fundamental to responsible AI application. As AI’s role in the workplace continues to evolve, the interplay between convenience and security must be delicately balanced to safeguard sensitive information effectively.

For those keen on developing a holistic approach to their data governance and security, investing in advanced data lifecycle management, enhanced classification capabilities, and automated compliance monitoring tools will be pivotal as the landscape of AI continues to reshape security paradigms.

Source: Noah Wire Services