On March 17, 2025, the Hoover Institution at Stanford University convened a conference titled “Social Media and Democratic Practice,” which examined the complex influences of social media platforms and generative artificial intelligence (AI) on public discourse and the functioning of democracy. The event assembled scholars and experts to discuss both the positive and negative impacts of these digital phenomena on democratic engagement.

The conference addressed the shifting landscape of social media, noting that earlier research on established platforms—such as Facebook, Twitter (now rebranded as X), and YouTube—had found minimal harmful effects from explicitly political content. Filter bubbles were reportedly scarce, and misinformation was considered to have little measurable influence, as most users tend not to focus on political matters in their online interactions. However, the proliferation of new platforms and formats has introduced new dimensions with potentially significant indirect political consequences; large-audience podcasts are a notable example, with Joe Rogan's 2024 episode featuring Donald Trump attracting over 50 million downloads. Moreover, seemingly apolitical content on traditional platforms, such as vaccine skepticism, carries important political implications that have yet to be thoroughly studied.

Morris P. Fiorina, Senior Fellow at the Hoover Institution and organizer of the conference, described the event as an initial step toward revitalizing academic attention to measuring the impact and reach of political content on social media. "Today is a first step in measuring impact and reach of political content on social media, something academics have not paid enough attention to in recent years," Fiorina said.

The conference was facilitated by the Hoover Institution’s Center for Revitalizing American Institutions (RAI) and featured a diverse range of presentations. Among the topics examined were strategic uses of social media by political campaigns—particularly the deployment of apolitical influencers to engage voters with traditionally low political involvement—as well as unsettling findings on the presence of Chinese propaganda within the training data used by large language models.

A significant focus was placed on combating misinformation and conspiracy theories using AI tools. Tom Costello, Assistant Professor at American University, shared findings from a study evaluating the effectiveness of AI-based interventions to counter conspiracy beliefs. Traditional debunking approaches were found to be only about 10 percent effective, largely due to the difficulty of anticipating and addressing the myriad possible false claims. Costello introduced "DebunkBot," an AI agent designed to engage with believers of varied conspiracy theories—including those related to the 9/11 attacks, the assassination of President John F. Kennedy, and COVID-19 misinformation—by summarizing and directly countering their beliefs.

The study involved 761 participants and showed promising results: an initial 40 percent reduction in the strength of conspiracy beliefs was noted after interacting with DebunkBot, with a sustained 20 percent reduction observed two months later. Costello suggested that AI agents like DebunkBot could offer scalable and cost-effective means to mitigate entrenched conspiracy worldviews.

Jennifer Allen, an incoming Assistant Professor at New York University, presented research focusing on vaccine misinformation and skepticism on Facebook, with a particular emphasis on developments since 2016. Allen's findings highlighted the powerful influence of social media on users' vaccine decisions in the United States. Despite the introduction of Meta's third-party fact-checking program, misinformation continued to spread widely, with some viral posts garnering tens or hundreds of millions of views.

Allen distinguished between posts flagged as misinformation—constituting a small portion of content—and unflagged vaccine-skeptical posts. The latter typically recounted anecdotal reports of adverse events occurring shortly after vaccination, such as deaths or injuries among otherwise healthy individuals, without exploring more plausible explanations. Framed to imply causation, this content evaded Meta's fact-checking protocols and was found to reduce vaccine intentions fifty times more effectively than demonstrably false claims.

To illustrate, Allen noted that a Chicago Tribune story about the death of a healthy physician following vaccination received five times more views than all flagged vaccine misinformation combined. She emphasized that the prevalence of such unflagged skeptical content requires targeted approaches to intervention. Allen also noted that during the COVID-19 pandemic, numerous reputable news organizations published stories that inadvertently contributed to vaccine skepticism due to evolving scientific understanding at the time.

Conference participants also expressed concern about the increasing incivility characterizing online interactions. The anonymity afforded by digital platforms was cited as enabling users to engage in insults and hostile behavior without traditional social repercussions. Furthermore, some attendees voiced unease regarding censorship efforts, pointing to examples from the COVID-19 pandemic when discourse surrounding the virus's origins and vaccine efficacy was often suppressed. They noted that some previously censored theories are now gaining scientific credibility.

To conclude the day’s discussions, representatives from Meta joined a panel discussion moderated by Nate Persily, Professor at Stanford Law School and founding co-director of Stanford’s Cyber Policy Center. The session featured election law expert and Distinguished Visiting Fellow Benjamin Ginsberg and prominent free speech scholar and Senior Fellow Eugene Volokh. The dialogue explored the complex challenges that social media platforms pose for legal frameworks and democratic governance.

Overall, the Hoover Institution’s conference underscored the evolving and multifaceted role of social media and AI technologies in shaping contemporary democratic practice, highlighting areas for future research and policy consideration.

Source: Noah Wire Services