Scientists believe that a breakthrough in understanding dolphin communication may be on the horizon, thanks to an innovative artificial intelligence model developed by Google. The initiative, named DolphinGemma, aims to decode the complex vocalisations of dolphins and potentially enable humans to “speak dolphin” in the future.

The model was built on an extensive database, the largest collection of dolphin sounds ever assembled, comprising clicks, whistles, and other vocalisations recorded over several years by the Wild Dolphin Project. According to Dr Denise Herzing, the founder and research director of the project, the goal of feeding dolphin sounds into the AI model is to uncover any underlying patterns and subtleties that may exist in dolphin communication, a final frontier in understanding these intelligent marine mammals.

Speaking to The Telegraph, Dr Herzing remarked, “We do not know if animals have words. Dolphins can recognise themselves in the mirror, they use tools, they’re smart but language is still the last barrier.” Dr Thad Starner, a scientist at Google DeepMind, explained that the model should help researchers uncover hidden structures and potential meanings within dolphin communication, greatly reducing the human effort such analysis previously required. “We’re just beginning to understand the patterns within the sounds,” he said.

Efforts to understand dolphins are not entirely new; previous research has revealed that dolphins have regional accents. Bottlenose dolphins produce signature whistles that function much like names, each unique to an individual. Recent studies have shown that these whistles are not just identifiers: they may also be shaped by environmental factors and social structure.

In research conducted at the University of Sassari in Italy, Gabriella La Manna and her team scrutinised the whistling behaviour of common bottlenose dolphins across six populations in the Mediterranean Sea. Drawing on 188 hours of recorded dolphin interactions, they analysed variation in acoustic features such as the duration and pitch of the sounds, identifying 168 unique signature whistles. Their findings suggested that the dolphins’ environment, particularly the presence of seagrass, directly affects the characteristics of their whistles: dolphins in seagrass areas produced higher-pitched, shorter whistles than those living over muddy seabeds. Whistles in smaller populations also showed more pitch variation than those in larger groups.

Last year, Dr Arik Kershenbaum, an expert in animal communication at the University of Cambridge, made notable strides in interpreting dolphin vocalisations. He identified repeated patterns in the sounds dolphins make, proposing that specific whistles could serve as the equivalent of a dolphin’s name. Dr Kershenbaum stressed the difficulty of the work: “With dolphins, it is really hard because we don’t even know the framework of what their language would be if they had one.” While he noted that certain whistles are clearly distinct, he acknowledged that it remains unknown whether dolphins themselves perceive different sounds as similar.

The convergence of AI and marine biology looks set to advance our understanding of dolphin communication, and scientists remain optimistic that meaningful breakthroughs could eventually open a dialogue between humans and dolphins. As the research unfolds, it promises to reveal more about these complex creatures and the linguistic possibilities that may lie within their vocalisations.

Source: Noah Wire Services