The explosive growth of artificial intelligence (AI), particularly in the realm of chatbots, brings both promise and peril. Amid the euphoria surrounding these advances, crucial questions about the reliability and quality of the information AI systems provide often go unasked. Gleb Lisikh, an AI management professional, is among those asking them, arguing that as new models proliferate, users must remain vigilant about the validity of their outputs.

Concerns about AI accuracy are particularly pressing in light of recent investigations revealing serious shortcomings. For instance, the chatbot DeepSeek, a product of innovative training methods, has been found lacking in reliability. A report indicated that it achieved a meagre 17% accuracy rate in delivering news and information, trailing significantly behind competitors such as OpenAI’s ChatGPT and Google’s Gemini. It repeated false information 30% of the time and gave vague or non-responsive answers 53% of the time, for a combined fail rate of 83%. Despite these red flags, DeepSeek surged in popularity shortly after launching on Apple’s App Store, heightening concerns about the potential for widespread misinformation.

Lisikh warns that the biases entrenched in AI systems can perpetuate inaccuracies. While human beings often possess an inherent desire to discern truth through lived experience, AI chatbots lack this capacity. Unlike humans, who may wrestle with conflicting emotions and beliefs, chatbots generate text by probabilistic prediction, without causal understanding of the world they describe. This limitation enables them to rationalise biases imposed by their trainers, often prioritising specific agendas over the truth. In testing DeepSeek, Lisikh uncovered a troubling arsenal of logical fallacies and outright falsehoods, affirming his cautionary stance on the use of these systems.
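
To see why, consider a stripped-down sketch of how a language model picks its next word. The prompt, vocabulary, and scores below are invented for illustration, but the selection step, choosing the statistically likeliest continuation rather than the true one, is the core of how these systems generate text.

    import math

    # Toy next-token scores for the prompt "The capital of Australia is".
    # The logits are invented; a real model derives them from patterns in
    # its training data, but the selection step works the same way.
    logits = {
        "Sydney": 2.1,     # common in casual text, yet factually wrong
        "Canberra": 1.7,   # correct, but less frequent in the wild
        "Melbourne": 0.9,
    }

    # Softmax turns the raw scores into a probability distribution.
    total = sum(math.exp(v) for v in logits.values())
    probs = {tok: math.exp(v) / total for tok, v in logits.items()}

    # The model emits the likeliest token. Nothing in this step consults
    # facts or causes; the choice reflects only the frequencies that the
    # training data happened to encode.
    print(probs)                      # ~ Sydney 0.51, Canberra 0.34, Melbourne 0.15
    print(max(probs, key=probs.get))  # 'Sydney' -- plausible, not true

A model trained on text in which the wrong answer is more common will confidently reproduce it, which is why curating or skewing the training data steers the output, the mechanism behind the trainer-imposed biases Lisikh describes.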

The implications of these findings extend beyond mere inaccuracy. Reports have emerged detailing DeepSeek’s higher incidence of ‘hallucinations’, instances in which the AI fabricates or provides incorrect information. Its performance also reportedly degrades when it is asked to respond in a single language: it drifts from context or switches between English and Chinese unexpectedly. Such behaviour points to a troubling pattern whereby models that improve in reasoning capability may simultaneously fail basic accuracy checks, encouraging reliance on misleading outputs.

Ethical concerns compound the picture. A study conducted by the Israeli cybersecurity firm ActiveFence uncovered alarming vulnerabilities in DeepSeek’s model, finding that nearly 38% of its responses to dangerous queries could be harmful. These results point to significant lapses in safety measures, particularly around sensitive content related to child safety, and underline the urgent need for regulatory frameworks and ethical guidelines governing AI applications.

The widespread adoption of AI technologies also prompts fears that misinformation tools will be weaponised for propaganda. A 2025 study by the Pew Research Center suggested that a staggering 82% of internet users view AI-generated misinformation as a looming threat to online credibility. Those fears are sharpened by DeepSeek’s rapid integration into various sectors, including finance: Chinese brokerages are adopting the model for market analysis and client interactions, often prioritising efficiency gains over thorough vetting of its outputs.

As AI systems progress, the necessity for scrutiny cannot be ignored. The potential for these technologies to mislead is high, especially as reliance on them grows in everyday life. In a world increasingly dominated by information derived from AI, maintaining a critical eye on accuracy and bias is not just advisable; it is essential. The need for rigorous checks and balances, transparent training regimes, and ethical oversight in AI development has never been clearer, and it should sit at the centre of public discourse as we navigate this uncharted territory.


Source: Noah Wire Services