AI experts Jeremie and Edouard Harris forecast that AI could reach human-level capability on most tasks by 2027, while industry figures Demis Hassabis, Geoffrey Hinton, and Bill Gates weigh in on AGI prospects and urgent regulatory needs amid growing risks.
On a recent episode of The Joe Rogan Experience, artificial intelligence (AI) experts Jeremie and Edouard Harris, the CEO and CTO of Gladstone AI respectively, discussed the rapid advancement of AI technology, leaving the host visibly surprised by the pace and implications of developments in the field.
Gladstone AI is a company focused on promoting the responsible development and adoption of AI. During the conversation, Joe Rogan began by directly addressing concerns about the potential risks associated with AI, asking the experts, “If there’s a doomsday clock for AI… what time is it?” Jeremie responded by acknowledging the diverse opinions among experts but suggested that AI could reach human-level capabilities in most areas by 2027 or 2028. He added with a hint of humour, “You’ll be able to have AI on your show and ask it what the doomsday clock is like by then,” to which Rogan lamented that the AI probably wouldn’t laugh at his jokes.
Jeremie cited a study by the lab METR to illustrate the accelerating improvements in AI. The study compared the time taken by AI models to complete specific tasks against the time taken by humans. It found that on tasks taking humans less than four minutes, AI achieved nearly a 100 percent success rate, while on tasks requiring an hour, it succeeded about 50 percent of the time. Edouard noted that this task horizon roughly doubles every four months, with progress fastest in areas such as research and software engineering. They concluded that by 2027, AI could complete tasks that currently take an AI researcher a month, albeit with a 50 percent success rate.
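To make that extrapolation concrete, the back-of-the-envelope arithmetic looks roughly like the Python sketch below. The one-hour, 50-percent-success starting horizon and the four-month doubling period come from the figures reported above; the mid-2025 start date, the end-of-2027 target, and the 160-hour working month are illustrative assumptions rather than figures from METR or the Harrises.

```python
# Back-of-the-envelope extrapolation of the task-horizon trend described above.
# Assumptions (not METR figures): start date, target date, 160-hour working month.
from datetime import date

DOUBLING_MONTHS = 4            # reported doubling period of the 50%-success task horizon
start_horizon_hours = 1.0      # reported horizon today: ~1 hour of human work at 50% success
start = date(2025, 6, 1)       # assumed starting point of the extrapolation
target = date(2027, 12, 1)     # assumed "by 2027" target

months_elapsed = (target.year - start.year) * 12 + (target.month - start.month)
doublings = months_elapsed / DOUBLING_MONTHS
horizon_hours = start_horizon_hours * 2 ** doublings

print(f"{doublings:.1f} doublings -> ~{horizon_hours:.0f} hours of human work, "
      f"roughly {horizon_hours / 160:.1f} working months (at an assumed 160 h/month)")
# With these assumptions: 7.5 doublings -> ~181 hours, i.e. about a month of work.
```

Under these assumptions the trend lands near the "month of work at a 50 percent success rate" figure quoted above; different start dates or doubling periods shift the result considerably.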
The episode coincided with recent comments from Demis Hassabis, CEO of Google DeepMind, who has expressed confidence that Artificial General Intelligence (AGI), an AI with cognitive capabilities comparable to those of humans, could be developed within the next five to ten years. Hassabis described AGI as a hypothetical stage of AI at which a system is capable of creativity and curiosity, qualities current AI lacks because it mostly relies on existing data.
Hassabis also predicted that by 2035, AGI could become embedded in daily life. He discussed the potential for AI to develop a form of self-awareness, though he cautioned that recognising such consciousness might be challenging, explaining, “With machines – they’re running on silicon, so even if they exhibit the same behaviours, and even if they say the same things, it doesn’t necessarily mean that this sensation of consciousness that we have is the same thing they will have.”
Highlighting AI’s potential benefits, Hassabis told Time Magazine that advancements could help solve major societal issues such as climate change and disease. “I think some of the biggest problems that face us today as a society, whether that’s climate or disease, will be helped by AI solutions,” he said. Nonetheless, he emphasised the need for rigorous testing and legal regulation to mitigate risks, noting the difficulty of monitoring AI systems to prevent them from taking harmful actions autonomously.
Other prominent figures in the AI field present more cautious views. Geoffrey Hinton, the computer scientist and 2024 Nobel laureate in Physics widely regarded as the “Godfather of AI,” warned that AI could threaten human existence within the next two decades. Having left Google in 2023 over concerns about AI’s trajectory, Hinton stated in a BBC interview, “The situation we’re in now is that most of the experts in the field think that sometime, within probably the next 20 years, we’re going to develop AIs that are smarter than people. And that’s a very scary thought.” He stressed the importance of government regulation to prevent misuse of AI but expressed scepticism about current political systems’ ability to provide such oversight.
Similarly, Microsoft co-founder Bill Gates shared his thoughts on AI’s transformative potential and associated risks. In an interview with Jimmy Fallon, Gates acknowledged AI’s capacity to drive innovation but admitted uncertainty about whether its direction can be managed. “I love the way it’ll drive innovation forward, but I think it’s a little bit unknown if we’ll be able to shape it. And so, legitimately, people are like ‘wow, this is a bit scary.’ It’s completely new territory,” he said.
The discussion on The Joe Rogan Experience echoed broader conversations within the tech community about balancing AI’s rapid development with the need for ethical frameworks and practical regulations to address potential challenges and unintended consequences.
Source: Noah Wire Services
- https://www.ft.com/content/774901e5-e831-4e0b-b0a1-e4b5b0032fb8 – An article discussing Demis Hassabis’s concerns about AI hype and his prediction that AGI could be developed within the next five to ten years.
- https://time.com/7277608/demis-hassabis-interview-time100-2025/ – An interview with Demis Hassabis where he discusses the potential of AGI to solve major societal issues and the need for rigorous testing and regulations.
- https://www.the-independent.com/tech/ai-deepmind-artificial-general-intelligence-b2332322.html – An article reporting on Demis Hassabis’s prediction that human-level AI may be just a few years away, highlighting the rapid progress in AI development.
- https://www.foxbusiness.com/technology/demis-hassabis-google-deepmind-ceo-says-human-level-ai-years – A report on Demis Hassabis’s statement that human-level AI could emerge within a few years, emphasizing the accelerating progress in AI technology.
- https://www.shacknews.com/article/143502/google-deepmind-demis-hassabis-agi-10-years – An article discussing Demis Hassabis’s belief that AGI will emerge in the next decade, with potential to match human capabilities in various tasks.
- https://www.thevocalnews.com/auto-tech/deepmind-ceo-agi-prediction-2025/cid16607631.htm – A report on Demis Hassabis’s prediction that AGI may arrive in 5–10 years, while current AI still lacks imagination and consciousness.
- https://www.dailymail.co.uk/news/article-14666551/Artificial-intelligence-Joe-Rogan-doomsday.html?ns_mchannel=rss&ns_campaign=1490&ito=1490 – Please view link – unable to access data
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 8
Notes: The narrative engages with current AI developments and discussions involving prominent figures like Demis Hassabis and Geoffrey Hinton, reflecting recent conversations in the tech community. However, without specific dates or events tied to the episode itself, it’s difficult to pinpoint its exact relevance or novelty.
Quotes check
Score: 7
Notes: While the quotes from Jeremie Harris, Demis Hassabis, and Geoffrey Hinton are attributed correctly and align with known statements from these individuals, the lack of a specific date for the Joe Rogan Experience episode makes it harder to verify whether these quotes are original or have been used elsewhere.
Source reliability
Score: 6
Notes: The narrative originates from the Daily Mail, a mainstream publication of variable reliability. The inclusion of quotes from credible figures like Hassabis and Hinton adds some credibility, though the quotes may have been selectively presented.
Plausibility check
Score: 9
Notes: Claims regarding AI advancements and timelines are plausible and align with ongoing discussions in the tech community. The predictions from experts like Hassabis and Hinton contribute to the narrative’s plausibility.
Overall assessment
Verdict (FAIL, OPEN, PASS): OPEN
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary: While the narrative engages with current AI discussions and includes plausible claims from prominent experts, the lack of a specific date for the Joe Rogan episode and the variable reliability of the source leave some uncertainty.