The Center for Countering Disinformation reveals a surge in AI-generated Russian propaganda, including deepfakes and manipulated chatbot narratives, reaching millions across social media platforms since early 2024.
The Center for Countering Disinformation (CCD) has reported a significant increase in Russia's use of artificial intelligence (AI) in its information warfare against Ukraine. Since the start of 2024, the CCD has identified 191 separate Russian information operations involving AI-generated content across various social media platforms.
These AI-driven campaigns have garnered an estimated 84.5 million views, indicating widespread reach and engagement. The CCD highlighted several forms of AI-generated material employed in these operations, including deepfake videos—where a person’s face or voice is realistically swapped to create false recordings—and partial deepfakes, which combine genuine video footage with AI-generated audio or digitally inserted scenes not present in the original content.
Other notable formats include fake captioned videos presented under the guise of reputable media outlets, as well as AI-generated images depicting soldiers or their families. These images are often used to evoke strong emotional responses, promote particular narratives, and increase viewer interaction. On the social media platform X in particular, emotionally charged AI content is utilised to advance pro-Russian viewpoints.
The CCD has also observed that Russian propaganda efforts extend to manipulating popular AI chatbots, prompting them to reproduce disinformation narratives that have been sanitised and repackaged by Russian media sources.
In a statement shared on its Telegram channel, the CCD emphasised the evolving nature of information warfare in the context of the Ukraine conflict: “The information front of the war is constantly evolving – the enemy is looking for new, more effective ways to influence public opinion. That’s why the role of artificial intelligence in Russia’s information operations is steadily increasing.”
Furthermore, recent trends indicate a steep rise in the circulation of AI-generated fakes on social media, with images being the most prevalent type of fabricated content.
This latest report from the Center for Countering Disinformation underscores the growing role of AI technology in shaping narratives and influencing perceptions amid the ongoing conflict between Russia and Ukraine.
Source: Noah Wire Services
- https://uacrisis.org/en/artificial-intelligence-in-the-kremlin-s-information-warfare – This article supports the claim that Russia uses AI extensively in its disinformation campaigns against Ukraine and NATO. It highlights AI’s role in spreading false narratives and manipulating public opinion.
- https://www.atlanticcouncil.org/blogs/new-atlanticist/exposing-pravda-how-pro-kremlin-forces-are-poisoning-ai-models-and-rewriting-wikipedia/ – This investigation reveals how pro-Kremlin forces use AI to expand their global influence by manipulating AI chatbots and other platforms, which aligns with the CCD’s findings about AI’s role in information warfare.
- https://ukraine-analytica.org/the-new-face-of-deception-ais-role-in-the-kremlins-information-warfare/ – This article examines Russia’s use of AI in information warfare during the Ukraine conflict, providing examples of AI-generated disinformation campaigns that match the CCD’s observations.
- https://therecord.media/russia-ukraine-cyber-espionage-artificial-intelligence – This article discusses Russia’s growing use of AI in analyzing data stolen during cyberattacks, which enhances the effectiveness of their operations and supports the broader context of AI use in Russian information warfare.
- https://www.csis.org/analysis/ukraines-future-vision-and-current-capabilities-waging-ai-enabled-autonomous-warfare – While focused on Ukraine’s capabilities, this paper indirectly supports the broader context of AI use in the conflict by highlighting advancements in AI-driven systems, which contrasts with Russia’s use of AI in information warfare.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score:
9
Notes:
The narrative discusses developments specifically in early 2024, indicating very recent information. No signs of recycled or outdated content are evident. The mention of the Telegram statement and specific quantitative data suggests a current report rather than a press release or recycled news.
Quotes check
Score:
8
Notes:
There is one direct quote from the Center for Countering Disinformation’s Telegram channel. Attempts to find an earlier online reference to this exact quote show no obvious prior publication, suggesting this may be an original statement from the CCD in this context.
Source reliability
Score:
7
Notes:
The Center for Countering Disinformation is a known actor specialised in identifying disinformation, but it is not a widely established global news organisation, which lends moderate reliability and warrants some caution. The report is detailed and specific, lending credibility, but external independent verification is limited.
Plausibility check
Score:
9
Notes:
The claims about Russia’s increasing use of AI for disinformation in the information warfare context of the Ukraine conflict align with known trends in hybrid warfare and media manipulation. The specificity of formats and platforms (e.g., deepfakes, AI chatbots, social media platform X) is credible and plausible given current technology capabilities and geopolitical context.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary:
The narrative provides a timely and plausible assessment of AI-driven Russian information operations in 2024, supported by a direct statement from the Center for Countering Disinformation. While the source is a specialised body rather than a mainstream global media outlet, the level of detail and current focus, alongside no evidence of recycled content or unverifiable quotes, supports high confidence in the report’s accuracy.