A global study of over 32,000 workers across 47 countries finds that while AI tools like ChatGPT improve productivity, many employees use them unsafely or unethically due to lack of clear policies, training and transparency within organisations.
A comprehensive global study involving over 32,000 workers across 47 countries has revealed significant insights into the adoption and use of artificial intelligence (AI) tools in the workplace. The research highlights both the widespread integration of AI technologies and the considerable challenges organisations face in managing their use effectively.
The study, conducted across diverse geographical regions and occupational groups, found that 58% of employees actively use AI at work, with around one-third engaging with such tools on a weekly or daily basis. General-purpose generative AI tools, notably ChatGPT, are the most commonly utilised, with approximately 70% of employees turning to free, public AI services rather than employer-provided solutions, which are used by 42%.
Employees reported several performance improvements linked to AI adoption, including increased efficiency (67%), improved access to information (61%), enhanced innovation (59%) and better quality of work (58%). These findings align with previous research evidencing productivity gains enabled by AI.
Despite these benefits, the study uncovered a number of concerning practices associated with AI use at work. Nearly half of AI users (48%) admitted to uploading sensitive company or customer information into public AI tools, and 44% used AI in ways that contravene organisational policies. Such actions raise significant privacy and data security concerns; other studies have similarly found that 27% of the information employees enter into AI tools is sensitive.
Complacency in the use of AI was another notable issue. Two-thirds of respondents (66%) admitted to relying on AI outputs without adequate verification, which contributed to errors in the work of more than half (56%). Younger employees aged 18 to 34 were more prone to both inappropriate and uncritical use of AI than their older colleagues, signalling potential generational differences in engagement with AI technologies.
Furthermore, the research identified a phenomenon termed “shadow AI” use, where employees do not disclose their reliance on AI tools. A majority (61%) avoided revealing their AI usage, 55% presented AI-generated content as their own, and 66% used AI without knowing if this was permitted in their workplace. This lack of transparency complicates organisations’ efforts to monitor and manage AI-related risks and accountability.
The underlying cause of these risky behaviours appears to be a deficit in organisational governance, training and guidance related to AI. Only about a third of employees reported that their employer had policies specifically addressing the use of generative AI tools, while a mere 6% indicated their organisation prohibited AI use altogether. The pressure to adopt AI was also evident; half of employees expressed concern about being left behind if they did not engage with such technologies.
Experts involved in the study emphasised the critical need for a structured approach to AI governance. They advocate investment in responsible AI training and in raising employees’ AI literacy, which encompasses the knowledge, training and confidence to use AI responsibly. The research suggests that greater AI literacy correlates with more thoughtful use of AI, including verification of outputs and awareness of AI’s limitations, as well as greater trust in these tools and stronger performance benefits from them.
In addition to formal policies and training, establishing a psychologically safe workplace culture was highlighted as essential. Such an environment encourages employees to openly share their AI use, fostering transparency, experimentation and collective learning, which support the responsible diffusion of AI within organisations.
While AI offers promising opportunities to transform working practices, the study underscores that realising these benefits depends on cultivating an AI-literate workforce, implementing clear governance frameworks and creating a culture of transparency and accountability. Without these components, AI risks becoming another unmanaged liability for organisations.
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 6
Notes: The narrative does not specify a publication date or cite recent events, but references a ‘comprehensive global study’ with methodology matching recent AI adoption trends. No outdated references detected.
Quotes check
Score: 5
Notes: No direct quotes are included, limiting verification opportunities. Claims are attributed to a study without named experts or sources, reducing traceability.
Source reliability
Score: 7
Notes: The narrative originates from a Google News-linked article, though the original publisher is unspecified. Content aligns with recent research trends observed in reputable outlets.
Plausibility check
Score: 8
Notes: Findings match known AI adoption patterns (e.g., widespread ChatGPT use, data privacy concerns). Specific statistics align with prior studies cited in the narrative.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary: The narrative presents plausible, coherent findings consistent with recent AI adoption research. While lacking direct quotes and precise sourcing, methodological details and statistical alignment with known trends support credibility. Confidence is moderated by the unspecified origin and absence of timestamped data.