On a recent episode of The Joe Rogan Experience, artificial intelligence (AI) experts Jeremie and Edouard Harris, the CEO and CTO of Gladstone AI respectively, discussed the rapid advancement of AI technology, leaving the host visibly surprised by the pace and implications of developments in the field.

Gladstone AI is a company focused on promoting the responsible development and adoption of AI. During the conversation, Joe Rogan began by directly addressing concerns about the potential risks associated with AI, asking the experts, “If there’s a doomsday clock for AI… what time is it?” Jeremie responded by acknowledging the diverse opinions among experts but suggested that AI could reach human-level capabilities in most areas by 2027 or 2028. He added with a hint of humour, “You’ll be able to have AI on your show and ask it what the doomsday clock is like by then,” to which Rogan lamented that the AI probably wouldn’t laugh at his jokes.

Jeremie cited a study by the research lab METR to illustrate the accelerating improvements in AI. The study measured how well AI models perform on tasks relative to the time those tasks take humans: for tasks that take a person less than four minutes, the models succeeded nearly 100 percent of the time, while for tasks requiring about an hour, they succeeded roughly 50 percent of the time. Edouard noted that this task horizon doubles roughly every four months, particularly in areas like research and software engineering. The brothers concluded that by 2027, AI could complete, at a 50 percent success rate, work that currently takes a human AI researcher a month.
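To make the arithmetic behind that projection concrete, here is a minimal sketch of the extrapolation, using the figures as described on the show (a one-hour task horizon at 50 percent success today, doubling every four months) plus one assumption of our own: that “a month of work” means roughly 167 working hours.

```python
import math

# Extrapolating the METR-style "task time horizon" claim.
# From the episode: AI currently succeeds ~50% of the time on tasks that
# take a human about an hour, and this horizon doubles every ~4 months.
# Assumption (not from the episode): a month of work is ~167 working
# hours (about 21 working days x 8 hours each).

start_horizon_hours = 1.0      # current 50%-success task horizon
doubling_period_months = 4.0   # doubling rate cited on the show
target_hours = 167.0           # one month of human work (assumption)

doublings_needed = math.log2(target_hours / start_horizon_hours)
months_needed = doublings_needed * doubling_period_months

print(f"Doublings needed: {doublings_needed:.1f}")              # ~7.4
print(f"Months until a one-month horizon: {months_needed:.0f}")  # ~30
```

About 7.4 doublings at four months each comes to roughly 30 months, which is how a one-hour horizon today lands on month-long tasks by 2027, provided the trend holds.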

The episode coincided with recent comments from Demis Hassabis, CEO of Google DeepMind, who has expressed confidence that Artificial General Intelligence (AGI), an AI with cognitive capabilities comparable to those of humans, could be developed within the next five to ten years. Hassabis described AGI as a hypothetical stage of AI at which a system is capable of creativity and curiosity, qualities current AI lacks because it mostly relies on existing data.

Hassabis also predicted that by 2035, AGI could become embedded in daily life. He discussed the potential for AI to develop a form of self-awareness, though he cautioned that recognising such consciousness might be challenging, explaining, “With machines – they’re running on silicon, so even if they exhibit the same behaviours, and even if they say the same things, it doesn’t necessarily mean that this sensation of consciousness that we have is the same thing they will have.”

Highlighting AI’s potential benefits, Hassabis told Time Magazine that advancements could help solve major societal issues such as climate change and disease. “I think some of the biggest problems that face us today as a society, whether that’s climate or disease, will be helped by AI solutions,” he said. Nonetheless, he also emphasised the need for rigorous testing and legal regulations to mitigate risks, noting how difficult it is to monitor AI systems closely enough to keep them from taking harmful actions autonomously.

Other prominent figures in the AI field take more cautious views. Geoffrey Hinton, the computer scientist and Nobel laureate in physics widely regarded as the “Godfather of AI,” has warned that AI could threaten human existence within the next two decades. Hinton, who left Google in 2023 over concerns about AI’s trajectory, said in a BBC interview, “The situation we’re in now is that most of the experts in the field think that sometime, within probably the next 20 years, we’re going to develop AIs that are smarter than people. And that’s a very scary thought.” He stressed the importance of government regulation to prevent misuse of AI but expressed scepticism about current political systems’ ability to provide such oversight.

Similarly, Microsoft co-founder Bill Gates shared his thoughts on AI’s transformative potential and the risks that come with it. In an interview with Jimmy Fallon, Gates acknowledged AI’s capacity to drive innovation but admitted uncertainty about whether its direction can be managed. “I love the way it’ll drive innovation forward, but I think it’s a little bit unknown if we’ll be able to shape it. And so, legitimately, people are like ‘wow, this is a bit scary.’ It’s completely new territory,” he said.

The discussion on The Joe Rogan Experience echoed broader conversations happening across the tech community about balancing AI’s rapid development with the need for ethical frameworks and practical regulations to address potential challenges and unintended consequences.

Source: Noah Wire Services