AI Is Deciphering Animal Speech. Should We Try to Talk Back?

Gizmodo · May 17, 2025
Chirps, trills, growls, howls, squawks. Animals converse in all kinds of ways, yet humankind has only scratched the surface of how they communicate with each other and the rest of the living world. Our species has trained some animals—and if you ask cats, animals have trained us, too—but we've yet to truly crack the code on interspecies communication.
Increasingly, animal researchers are deploying artificial intelligence to accelerate our investigations of animal communication—both within species and between branches on the tree of life. As scientists chip away at the complex communication systems of animals, they move closer to understanding what creatures are saying—and maybe even how to talk back. But as we try to bridge the linguistic gap between humans and animals, some experts are raising valid concerns about whether such capabilities are appropriate—or whether we should even attempt to communicate with animals at all.
Using AI to untangle animal language
Towards the front of the pack—or should I say pod?—is Project CETI, which has used machine learning to analyze more than 8,000 sperm whale 'codas'—structured click patterns recorded by the Dominica Sperm Whale Project. Researchers uncovered contextual and combinatorial structures in the whales' clicks, naming features like 'rubato' and 'ornamentation' to describe how whales subtly adjust their vocalizations during conversation. These patterns helped the team create a kind of phonetic alphabet for the animals—an expressive, structured system that may not be language as we know it but reveals a level of complexity that researchers weren't previously aware of. Project CETI is also working on ethical guidelines for the technology, a critical goal given the risks of using AI to 'talk' to the animals.
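The rhythmic features CETI describes can be illustrated with a toy calculation. To be clear, this is not Project CETI's actual pipeline, and the coda timestamps below are invented: given the click times in a coda, the inter-click intervals and their drift capture whether the tempo holds steady or bends, the kind of variation behind a term like 'rubato'.

```python
# Toy sketch (not Project CETI's pipeline): given click timestamps from a
# sperm whale coda, compute inter-click intervals (ICIs) and a simple
# tempo-drift measure. The coda below is hypothetical.

def inter_click_intervals(click_times):
    """Return the gaps between consecutive clicks, in seconds."""
    return [b - a for a, b in zip(click_times, click_times[1:])]

def tempo_drift(click_times):
    """Mean change in ICI across the coda; nonzero drift means the coda
    speeds up or slows down rather than keeping a fixed tempo."""
    icis = inter_click_intervals(click_times)
    deltas = [b - a for a, b in zip(icis, icis[1:])]
    return sum(deltas) / len(deltas) if deltas else 0.0

# A made-up five-click coda whose clicks gradually spread out:
coda = [0.00, 0.20, 0.42, 0.66, 0.92]
print(inter_click_intervals(coda))   # four intervals, each slightly longer
print(tempo_drift(coda) > 0)         # True: the coda is slowing down
```

A real analysis would of course work from thousands of codas and richer features, but the intuition is the same: meaning may live in timing, not just in click counts.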
Meanwhile, Google and the Wild Dolphin Project recently introduced DolphinGemma, a large language model (LLM) trained on 40 years of dolphin vocalizations. Just as ChatGPT is an LLM for human inputs—taking text (and, in multimodal versions, images) and producing responses to relevant queries—DolphinGemma intakes dolphin sound data and predicts what vocalization comes next. DolphinGemma can even generate dolphin-like audio, and the researchers' prototype two-way system, Cetacean Hearing Augmentation Telemetry (fittingly, CHAT), uses a smartphone-based interface that dolphins employ to request items like scarves or seagrass—potentially laying the groundwork for future interspecies dialogue.
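The next-vocalization idea can be sketched in miniature. What follows is a hedged illustration, not Google's model: it treats invented call-type labels as tokens and predicts the likeliest successor from simple bigram counts—the same prediction task DolphinGemma performs at vastly greater scale over real audio.

```python
# Minimal sketch of next-vocalization prediction (not DolphinGemma itself):
# treat a sequence of call-type labels as tokens and predict the most likely
# next one from observed bigram counts. The labels here are invented.
from collections import Counter, defaultdict

def train_bigrams(sequences):
    """Count how often each token follows each other token."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequent successor of `token`, or None if unseen."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

# Hypothetical recordings, each a sequence of coarse call types:
recordings = [
    ["whistle", "click", "click", "burst"],
    ["whistle", "click", "burst"],
    ["click", "click", "burst"],
]
model = train_bigrams(recordings)
print(predict_next(model, "whistle"))  # 'click' in this toy corpus
```

An LLM replaces these raw counts with a neural network over long contexts, but "predict what comes next" is the shared core.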
'DolphinGemma is being used in the field this season to improve our real-time sound recognition in the CHAT system,' said Denise Herzing, founder and director of the Wild Dolphin Project, which spearheaded the development of DolphinGemma in collaboration with researchers at Google DeepMind, in an email to Gizmodo. 'This fall we will spend time ingesting known dolphin vocalizations and let Gemma show us any repeatable patterns they find,' such as vocalizations used in courtship and mother-calf discipline.
In this way, Herzing added, the AI applications are two-fold: Researchers can use it both to explore dolphins' natural sounds and to better understand the animals' responses to human mimicking of dolphin sounds, which are synthetically produced by the AI CHAT system.
Expanding the animal AI toolkit
Outside the ocean, researchers are finding that human speech models can be repurposed to decode terrestrial animal signals, too. A University of Michigan-led team used Wav2Vec2—a speech recognition model trained on human voices—to identify dogs' emotions, genders, breeds, and even individual identities based on their barks. The pre-trained human model outperformed a version trained solely on dog data, suggesting that human language model architectures could be surprisingly effective in decoding animal communication.
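That transfer-learning pattern can be sketched simply. In this hypothetical example (the Michigan team's real encoder is Wav2Vec2; a toy `embed` function stands in for it here), the pretrained feature extractor stays frozen and only a tiny classifier 'head' is fit on bark embeddings:

```python
# Sketch of the transfer-learning pattern (the real encoder is Wav2Vec2;
# the `embed` function below is a stand-in): freeze a pretrained feature
# extractor and fit only a small classifier head on its embeddings.

def embed(waveform):
    """Stand-in for a frozen pretrained encoder: summarize a waveform as
    (mean amplitude, peak amplitude). A real system would use learned
    Wav2Vec2 embeddings instead."""
    mean = sum(abs(x) for x in waveform) / len(waveform)
    peak = max(abs(x) for x in waveform)
    return (mean, peak)

def fit_centroids(labeled_clips):
    """'Head' training: compute one centroid per label in embedding space."""
    sums = {}
    for label, clip in labeled_clips:
        vec = embed(clip)
        s, c = sums.get(label, ((0.0, 0.0), 0))
        sums[label] = ((s[0] + vec[0], s[1] + vec[1]), c + 1)
    return {lab: (s[0] / c, s[1] / c) for lab, (s, c) in sums.items()}

def classify(centroids, clip):
    """Assign a clip to the label with the nearest centroid."""
    vec = embed(clip)
    return min(centroids, key=lambda lab: (vec[0] - centroids[lab][0]) ** 2
                                          + (vec[1] - centroids[lab][1]) ** 2)

# Invented training clips: 'playful' barks quieter, 'aggressive' louder.
train = [("playful", [0.1, 0.2, 0.1]), ("aggressive", [0.8, 0.9, 0.7])]
heads = fit_centroids(train)
print(classify(heads, [0.75, 0.85, 0.8]))  # 'aggressive'
```

The study's striking result is that features learned from human speech transfer this way better than features learned from barks alone.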
Of course, we need to consider the different levels of sophistication these AI models are targeting. Determining whether a dog's bark is aggressive or playful, or whether the dog is male or female, is understandably easier for a model than, say, parsing the nuanced meaning encoded in sperm whale phonetics. Nevertheless, each study inches scientists closer to understanding how AI tools, as they currently exist, can best be applied to such an expansive field—and helps refine those tools into a more useful part of the researcher's toolkit.
And even cats—often seen as aloof—appear to be more communicative than they let on. In a 2022 study out of Paris Nanterre University, cats showed clear signs of recognizing their owner's voice; beyond that, the felines responded more intensely when spoken to directly in 'cat talk.' That suggests cats pay attention not only to what we say but also to how we say it—especially when it comes from someone they know.
Earlier this month, a pair of cuttlefish researchers found evidence that the animals have a set of four 'waves,' or physical gestures, that they make to one another, as well as in response to human playback of cuttlefish waves. The group plans to apply an algorithm to categorize the types of waves, automatically track the creatures' movements, and more rapidly understand the contexts in which the animals express themselves.
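The categorization step the team describes could look something like a basic k-means pass. This sketch uses made-up 2-D gesture features (say, arm extension and duration) rather than real tracking data, and is an assumption about the approach, not the team's published method:

```python
# Hedged sketch of gesture categorization via k-means (invented features,
# not the cuttlefish team's actual algorithm or data).

def kmeans(points, centers, iters=10):
    """Cluster 2-D points around the given starting centers."""
    groups = [[] for _ in centers]
    for _ in range(iters):
        # Assign each point to its nearest center.
        groups = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centers[j])))
            groups[i].append(p)
        # Recompute each center as the mean of its group.
        centers = [tuple(sum(coord) / len(g) for coord in zip(*g))
                   if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Hypothetical gestures as (arm extension, duration) pairs:
gestures = [(0.1, 0.2), (0.15, 0.25), (0.9, 0.8), (0.95, 0.85)]
centers, groups = kmeans(gestures, centers=[(0.0, 0.0), (1.0, 1.0)])
print([len(g) for g in groups])  # [2, 2]: two wave types, two gestures each
```

In practice the features would come from automated movement tracking, and the number of clusters would itself be something to validate against the four observed wave types.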
Private companies (such as Google) are also getting in on the act. Last week, China's largest search engine, Baidu, filed a patent with the country's IP administration proposing to translate animal (specifically cat) vocalizations into human language. The quick and dirty on the tech is that it would intake a trove of data from your kitty, and then use an AI model to analyze the data, determine the animal's emotional state, and output the apparent human language message your pet was trying to convey.
A universal translator for animals?
Together, these studies represent a major shift in how scientists are approaching animal communication. Rather than starting from scratch, research teams are building tools and models designed for humans—and making advances that would have taken much longer otherwise. The end goal could (read: could) be a kind of Rosetta Stone for the animal kingdom, powered by AI.
'We've gotten really good at analyzing human language just in the last five years, and we're beginning to perfect this practice of transferring models trained on one dataset and applying them to new data,' said Sara Keen, a behavioral ecologist and electrical engineer at the Earth Species Project, in a video call with Gizmodo.
The Earth Species Project plans to launch its flagship audio-language model for animal sounds, NatureLM, this year, and a demo for NatureLM-audio is already live. With input data from across the tree of life—as well as human speech, environmental sounds, and even music—the model aims to become a converter of human speech into animal analogues. The model 'shows promising domain transfer from human speech to animal communication,' the project states, 'supporting our hypothesis that shared representations in AI can help decode animal languages.'
'A big part of our work really is trying to change the way people think about our place in the world,' Keen added. 'We're making cool discoveries about animal communication, but ultimately we're finding that other species are just as complicated and nuanced as we are. And that revelation is pretty exciting.'
The ethical dilemma
Indeed, researchers generally agree on the promise of AI-based tools for improving the collection and interpretation of animal communication data. But some feel that there's a breakdown in communication between that scholarly familiarity and the public's perception of how these tools can be applied.
'I think there's currently a lot of misunderstanding in the coverage of this topic—that somehow machine learning can create this contextual knowledge out of nothing. That so long as you have thousands of hours of audio recordings, somehow some magic machine learning black box can squeeze meaning out of that,' said Christian Rutz, an expert in animal behavior and cognition and founding president of the International Bio-Logging Society, in a video call with Gizmodo. 'That's not going to happen.'
'Meaning comes through the contextual annotation and this is where I think it's really important for this field as a whole, in this period of excitement and enthusiasm, to not forget that this annotation comes from basic behavioral ecology and natural history expertise,' Rutz added. In other words, let's not put the horse before the cart, especially since the cart—in this case—is what's powering the horse.
But with great power…you know the cliché. Essentially, how can humans develop and apply these technologies in a way that is scientifically illuminating while minimizing harm or disruption to the animal subjects? Experts have put forward ethical standards and guardrails for using the technologies that prioritize the welfare of creatures as we get closer to—well, wherever the technology is going.
As AI advances, conversations about animal rights will have to evolve. In the future, animals could become more active participants in those conversations—a notion that legal experts are exploring as a thought exercise, but one that could someday become reality.
'What we desperately need—apart from advancing the machine learning side—is to forge these meaningful collaborations between the machine learning experts and the animal behavior researchers,' Rutz said, 'because it's only when you put the two of us together that you stand a chance.'
There's no shortage of communication data to feed into data-hungry AI models, from pitch-perfect prairie dog squeaks to snails' slimy trails (yes, really). But exactly how we make use of the information we glean from these new approaches requires thorough consideration of the ethics involved in 'speaking' with animals.
A recent paper on the ethical concerns of using AI to communicate with whales outlined six major problem areas. These include privacy rights, cultural and emotional harm to whales, anthropomorphism, technological solutionism (an overreliance on technology to fix problems), gender bias, and limited effectiveness for actual whale conservation. That last issue is especially urgent, given how many whale populations are already under serious threat.
It increasingly appears that we're on the brink of learning much more about the ways animals interact with one another—indeed, pulling back the curtain on their communication could also yield insights into how they learn, socialize, and act within their environments. But there are still significant challenges to overcome, such as asking ourselves how we use the powerful technologies currently in development.