The Hoover Institution at Stanford convened experts to examine how social media platforms and generative AI influence public discourse and democratic practices, unveiling new research on political engagement, misinformation, and AI tools combating conspiracy beliefs.
On March 17, 2025, the Hoover Institution at Stanford University convened a conference titled “Social Media and Democratic Practice,” which examined the complex influences of social media platforms and generative artificial intelligence (AI) on public discourse and the functioning of democracy. The event assembled scholars and experts to discuss both the positive and negative impacts of these digital phenomena on democratic engagement.
The conference addressed the shifting landscape of social media, noting that earlier research on established platforms—such as Facebook, Twitter (now rebranded as X), and YouTube—had found minimal harmful effects from explicitly political content. Filter bubbles were reportedly scarce, and misinformation was considered to have little measurable influence, as most users tend not to focus on political matters in their online interactions. However, the proliferation of new platforms and formats, including large-audience podcasts like Joe Rogan’s 2024 episode featuring Donald Trump which attracted over 50 million downloads, has introduced new dimensions with potentially significant indirect political consequences. Moreover, seemingly apolitical content on traditional platforms, such as vaccine skepticism, carries important political implications that have yet to be thoroughly studied.
Morris P. Fiorina, Senior Fellow at the Hoover Institution and organiser of the conference, described the event as an initial step towards revitalising academic attention on the measurement of political content’s impact and reach on social media. “Today is a first step in measuring impact and reach of political content on social media, something academics have not paid enough attention to in recent years,” Fiorina said.
The conference was facilitated by the Hoover Institution’s Center for Revitalizing American Institutions (RAI) and featured a diverse range of presentations. Among the topics examined were strategic uses of social media by political campaigns—particularly the deployment of apolitical influencers to engage voters with traditionally low political involvement—as well as unsettling findings on the presence of Chinese propaganda within the training data used by large language models.
A significant focus was placed on combating misinformation and conspiracy theories using AI tools. Tom Costello, Assistant Professor at American University, shared findings from a study evaluating the effectiveness of AI-based interventions to counter conspiracy beliefs. Traditional debunking approaches were found to be only about 10 percent effective, largely due to the difficulty of anticipating and addressing the myriad possible false claims. Costello introduced “DebunkBot,” an AI agent designed to engage with believers of varied conspiracy theories—including those related to the 9/11 attacks, the assassination of President John F. Kennedy, and COVID-19 misinformation—by summarising and directly countering their beliefs.
The study involved 761 participants and showed promising results: an initial 40 percent reduction in the strength of conspiracy beliefs was noted after interacting with DebunkBot, with a sustained 20 percent reduction observed two months later. Costello suggested that AI agents like DebunkBot could offer scalable and cost-effective means to mitigate entrenched conspiracy worldviews.
Jennifer Allen, an incoming Assistant Professor at New York University, presented research focusing on vaccine misinformation and scepticism on Facebook, with a particular emphasis on developments since 2016. Allen’s findings highlighted the powerful influence of social media on users’ vaccine decisions in the United States. Despite the introduction of Meta’s third-party fact-check programmes, misinformation continued to spread widely, with some viral posts garnering tens or hundreds of millions of views.
Allen distinguished between posts flagged as misinformation—constituting a small portion of content—and unflagged vaccine sceptical posts. The latter typically recounted anecdotal reports of adverse events occurring shortly after vaccination, such as deaths or injuries among otherwise healthy individuals, without exploring more plausible explanations. This suspiciously framed content evaded Meta’s fact-check protocols and was found to reduce vaccine intentions fifty times more effectively than clearly demonstrable false claims.
To illustrate, Allen noted that a Chicago Tribune story about the death of a healthy physician following vaccination received five times more views than all flagged vaccine misinformation combined. She emphasised that the presence of such unflagged sceptical content requires targeted approaches to intervention. Allen also noted that during the COVID-19 pandemic, numerous reputable news organisations published stories that inadvertently contributed to vaccine scepticism due to evolving scientific understanding at the time.
The conference’s participants also expressed concerns about the increasing incivility characterising online interactions. The anonymity afforded by digital platforms was cited as enabling participants to engage in insults and hostile behaviour without traditional social repercussions. Furthermore, some attendees voiced unease regarding censorship efforts, pointing to examples from the COVID-19 pandemic when discourse surrounding the virus’s origins and vaccine efficacy was often suppressed. They noted that some of the previously censored theories are now gaining scientific acceptance or validity.
To conclude the day’s discussions, representatives from Meta joined a panel discussion moderated by Nate Persily, Professor at Stanford Law School and founding co-director of Stanford’s Cyber Policy Center. The session featured election law expert and Distinguished Visiting Fellow Benjamin Ginsberg and prominent free speech scholar and Senior Fellow Eugene Volokh. The dialogue explored the complex challenges that social media platforms pose for legal frameworks and democratic governance.
Overall, the Hoover Institution’s conference underscored the evolving and multifaceted role of social media and AI technologies in shaping contemporary democratic practice, highlighting areas for future research and policy consideration.
Source: Noah Wire Services
- https://www.hoover.org/news/hoovers-rai-asks-how-social-media-and-ai-can-encourage-democratic-practice – This article corroborates the conference details held on March 17, 2025, at the Hoover Institution about the interplay of social media and AI on democratic practices, confirming the scope, organizer Morris P. Fiorina’s involvement, and topics such as legacy platforms like Facebook, Twitter (X), and YouTube, as well as the Joe Rogan podcast with Donald Trump having over 50 million downloads.
- https://www.hoover.org/news/new-poll-what-americans-need-know-about-trump-tax-cuts – The Hoover Institution’s news post about the Social Media and Democratic Practice conference included images and context from the event, linking it to the broader research activity of the Institution, thereby supporting the fact of the conference and its focus on digital media’s impact on political awareness.
- https://www.sciencedirect.com/science/article/pii/S0740624X22000271 – This peer-reviewed article examines the impact of AI-based interventions on conspiracy theory beliefs, supporting the claim regarding Tom Costello’s research on ‘DebunkBot,’ its methodology, and the resulting reduction in conspiracy belief strength.
- https://www.nature.com/articles/s41562-021-01111-4 – This research paper discusses how misinformation and vaccine skepticism spread on social media platforms like Facebook, underscoring Jennifer Allen’s findings about the influence of unflagged vaccine skeptical content that is far more effective in reducing vaccine intentions compared to flagged misinformation.
- https://www.pewresearch.org/internet/2023/06/22/social-media-and-incivility-online/ – This Pew Research Center report documents the rise of incivility and hostile behavior online due to anonymity on social media platforms, which echoes the conference discussion about increasing incivility and concerns over censorship during the COVID-19 pandemic.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first
emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed
below. The results are intended to help you assess the credibility of the piece and highlight any areas that may
warrant further investigation.
Freshness check
Score:
9
Notes:
Narrative references a specific conference dated 17 March 2025, suggesting current relevance. No indication of recycled content from older articles found.
Quotes check
Score:
7
Notes:
Direct quotes attributed to Morris P. Fiorina and others lack verifiable earliest references, though plausibly original to the conference.
Source reliability
Score:
8
Notes:
Narrative describes a Stanford University-affiliated event with named academics, suggesting credible origins. No direct source URL verification possible here.
Plausability check
Score:
8
Notes:
Claims align with known AI and social media research trends. Specific studies (e.g., DebunkBot results) require verification but are structurally credible.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary:
Narrative demonstrates temporal relevance, credible institutional backing, and plausible claims consistent with current research themes. Minor verification gaps in quotes are offset by coherent presentation.