Scientists Are Beginning to Learn the Language of Bats and Bees Using AI

The new field of digital bioacoustics is using machine learning to try to decipher animal-speak, including honeybee toots and quacks and whoops.

Hi Science, Quickly listeners. This is Jeff DelViscio, executive producer of the show. 

The whole podcast team is out in the field, so while we’re away, we’re bringing back a few amazing oldies from the archive. 

AI is everywhere these days—and it’s being used, or at least some are trying to use it, for just about anything you can think of. 


We all probably know about AI of the ChatGPT variety now. But what about AI for animals? Specifically, science is starting to apply AI to understanding animal language. 

Tech editor Sophie Bushwick and producer Kelso Harper bring us this fascinating look into just what machine learning is discovering about how animals talk to one another. And, perhaps, this new research might just start to break down the divide between us and the rest of the animal kingdom. 

The episode was first aired on March 17, 2023. 

Enjoy!

[CLIP: Bird songs]

Kelso Harper: Have you ever wondered what songbirds are actually saying to each other with all of their chirping? 

Sophie Bushwick: Or what your cat could possibly be yowling about so early in the morning?

[CLIP: Cat meowing]

Harper: Well, powerful new technologies are helping researchers decode animal communication. And even begin to talk back to nonhumans.

Bushwick: Advanced sensors and artificial intelligence might have us at the brink of interspecies communication.

[CLIP: Show theme music]

Harper: Today, we're talking about how scientists are starting to communicate with creatures like bats and honeybees and how these conversations are forcing us to rethink our relationship with other species. I'm Kelso Harper, multimedia editor at Scientific American.

Bushwick: And I'm Sophie Bushwick, tech editor.

Harper: You're listening to Science, Quickly. Hey, Sophie.

Bushwick: Hi, Kelso.

Harper: So you recently chatted with the author of a new book called The Sounds of Life: How Digital Technology Is Bringing Us Closer to the Worlds of Animals and Plants.

Bushwick: Yeah, I had a great conversation with Karen Bakker, a professor at the University of British Columbia and a fellow at the Harvard Radcliffe Institute for Advanced Study. Her book explores how researchers are leveraging new tech to understand animal communication in the burgeoning field of digital bioacoustics.

Harper: Digital bioacoustics. Huh. So what does that actually look like? Are we trying to make animals talk like humans using translation collars like in the movie Up?

[CLIP: From Walt Disney’s Up]

Dug the Dog: My name is Dug. My master made me this collar so that I may talk. Squirrel!

Bushwick: Not quite, but that is similar to how researchers first started trying to communicate with animals in the seventies and eighties, which is to say they attempted to teach the animals human language. But many scientists today have moved away from this human-centric approach, and instead they want to understand animal communication on its own terms.

Harper: So instead of trying to teach birds to speak English, we're deciphering what they're already saying to each other in birdish or birdese.

Bushwick: Right, exactly. This new field of digital bioacoustics uses portable field recorders that are like mini microphones you can put pretty much anywhere—in trees, on mountaintops, even on the backs of whales and birds.

They record sound 24/7 and create oodles of data, which is where artificial intelligence comes in. Researchers can apply natural language processing algorithms, like the ones used by Google Translate, to detect patterns in these recordings and begin to decode what animals might be saying to each other.
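
To give a concrete sense of what that pattern-finding looks like, here is a minimal sketch, assuming a synthetic recording in place of real field audio: it simply turns sound into spectrogram features and lets a clustering algorithm group recurring "call types" on its own. It is illustrative only and not the researchers' actual pipeline, which uses far larger datasets and far more sophisticated models.

```python
# Minimal sketch (not the researchers' actual code): find recurring "call types"
# in a recording by extracting spectrogram features and clustering similar frames.
import numpy as np
from scipy.signal import spectrogram
from sklearn.cluster import KMeans

SR = 22_050  # sample rate in Hz


def synthetic_recording() -> np.ndarray:
    """Stand-in for a field recording: alternating high and low 'calls' plus noise."""
    rng = np.random.default_rng(0)
    t = np.linspace(0, 0.25, int(SR * 0.25), endpoint=False)
    high_call = np.sin(2 * np.pi * 6000 * t)  # hypothetical high-pitched call
    low_call = np.sin(2 * np.pi * 800 * t)    # hypothetical low-pitched call
    clips = [high_call, low_call] * 8
    noise = 0.05 * rng.standard_normal(len(clips[0]) * len(clips))
    return np.concatenate(clips) + noise


audio = synthetic_recording()
freqs, times, spec = spectrogram(audio, fs=SR, nperseg=1024, noverlap=512)

# Each spectrogram column (one time slice) becomes a feature vector;
# log-scaling keeps loud and quiet components comparable.
features = np.log1p(spec).T

# Group time slices into two clusters -- ideally the two call types.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print("cluster assignments over time:", labels[:40])
```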

Harper: Wow, that is wild. So what have scientists learned from this so far?

Bushwick: One of the examples Karen gives in her book is about Egyptian fruit bats. A researcher named Yossi Yovel recorded audio and video of nearly two dozen bats for two and a half months. His team adapted a voice recognition program to analyze 15,000 of the sounds, and then the algorithm correlated specific sounds to certain social interactions in the videos, like fighting over food or jockeying for sleeping positions.

So this research, combined with some other related studies, has revealed that bats are capable of complex communication.
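
As a toy illustration of that step, and not Yossi Yovel's actual code: once each recorded call has been paired with the social interaction seen on video, a standard classifier can learn to map acoustic features to context labels. The feature vectors and label names below are invented for the example.

```python
# Toy illustration: learn to map acoustic features of a call to the social
# context it occurred in (labels and features here are synthetic placeholders).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
CONTEXTS = ["food dispute", "sleep-spot dispute", "perch protest", "mating protest"]

# Fake "acoustic feature" vectors (e.g., pitch, duration, energy statistics);
# each context gets a slightly different distribution so there is signal to learn.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(200, 8)) for i in range(len(CONTEXTS))])
y = np.repeat(CONTEXTS, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```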

Harper: All I remember being taught was that bats make high-pitched sounds to echolocate as they fly around, but it sounds like there's a lot more to it than that.

Bushwick: Yes, definitely. We've learned that bats have what are known as signature calls, which act like individual names.

Harper: Whoa.

Bushwick: And they distinguish between sexes when they communicate with each other.

Harper: What?

Bushwick: They have dialects. They argue over food and sleeping positions. They socially distance when they're ill.

Harper: Are you serious?

Bushwick: Yeah. They're better at it in some ways than we are. So one of the coolest things is that bat mothers use their own version of motherese with their young.

So when humans talk to cute little babies, we use motherese. We raise our pitch, you know, like, oh, what a cute little sweet potato. And bats also use a special tone to talk to their young, but they lower their pitch instead: [lowers pitch] oh, what a cute little sweet potato.

This makes the bat babies babble back, and it might help them learn specific words or referential sounds the same way that motherese helps human babies acquire language.

Harper: That is bonkers. Or I don't know. Is it? Do I just think it is because I've been caught in the trap of thinking that humans are somehow completely different from other animals and we have a, I don't know, uniquely sophisticated way of communicating? Are we learning that we might not be quite as special as we thought?

Bushwick: Kind of, yeah. This work is raising a lot of important philosophical questions and ethical ones, too. For a long time, philosophers said we would never be able to determine if animals can be said to have language, let alone be able to decipher or speak it. But these new technologies have really changed the game.

One thing that Karen said during our interview is that we can't talk to bats, but our computers can.

You and I can't hear, let alone keep up with the fast, high-pitched communication between bats. And we certainly can't speak it ourselves, but electronic sensors and speakers can.

And with artificial intelligence, we can begin to trace patterns in animal communication that we never could before.

People still debate the question of whether we can call it animal language, but it's becoming clear that animals have much more complex ways of communicating than we previously thought.

Harper: Apparently. What other examples of this can you find in the book?

Bushwick: Karen also told me the story of a bee researcher named Tim Landgraf. So honeybee communication—very different from our own. They use not just sounds but also the movements of their bodies to speak. So have you heard of the famed waggle dance?

Harper: Yeah. Is that the one where the bees shake their fuzzy little butts in different directions to explain where to find nectar?

Bushwick: That's the one. But the waggle dance is just one form of honeybee communication. Landgraf and his team used a combination of natural language processing, like in the bat study, and computer vision, which analyzes imagery, to decipher both the sounds and the wiggles of bee chatter. They're now able to track individual bees and predict the impact of what one bee says to another.
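
For a concrete sense of what "decoding the wiggles" involves, here is a simplified sketch of the classic waggle-dance geometry, not Landgraf's software: the run's angle from vertical on the comb roughly encodes the food source's bearing relative to the sun, and the waggle duration grows with distance. The distance calibration constant below is an assumption; real values vary by colony.

```python
# Simplified waggle-dance decoder (illustrative only): angle-from-vertical on
# the comb maps to a compass bearing relative to the sun; duration maps to
# distance via an assumed calibration.
from dataclasses import dataclass

METERS_PER_WAGGLE_SECOND = 1000.0  # assumed calibration; real values vary by colony


@dataclass
class WaggleRun:
    angle_from_vertical_deg: float  # as measured by computer-vision tracking
    duration_s: float               # length of the waggle phase


def decode(run: WaggleRun, sun_azimuth_deg: float) -> tuple[float, float]:
    """Return (compass bearing of food in degrees, approximate distance in meters)."""
    bearing = (sun_azimuth_deg + run.angle_from_vertical_deg) % 360.0
    distance = run.duration_s * METERS_PER_WAGGLE_SECOND
    return bearing, distance


# Example: a 20-degree-right waggle run lasting 1.5 s, with the sun due south.
print(decode(WaggleRun(angle_from_vertical_deg=20.0, duration_s=1.5), sun_azimuth_deg=180.0))
```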

Harper: That is so cool.

Bushwick: Yeah, they have all sorts of specific signals that the researchers have given these funny names. So bees toot [CLIP: Bee toot sound] and quack [CLIP: Bee quack sound]. They have a whooping sound for danger [CLIP: Bee whooping sound], piping signals related to swarming [CLIP: Bee piping sound], and they use a hush or stop signal to get the hive to quiet down [CLIP: Bee hush sound].

Harper: Wow. I love the image of a quacking bee.

Bushwick: Landgraf's next step was to encode what they learned into a robotic bee, which he called—drum roll, please—Robobee.

Harper: Classic.

Bushwick: After seven or eight prototypes, they had a Robobee that could actually go into a hive, and then it would emit commands like the stop signal, and the bees would obey.

Harper: That is bananas. Just one step closer to the very science-based world of Bee Movie.

Bushwick: The height of cinematic achievement.

[CLIP: From DreamWorks Animation’s Bee Movie]

Bee: I gotta say something. You like jazz?

Harper: Oh, well, before we wrap up, is there anything else from your conversation with Karen that you'd like to add?

Bushwick: I'd love to end on one quote from her. She said, "The invention of digital bioacoustics is analogous to the invention of the microscope."

Harper: Wow.

Bushwick: The microscope opened up an entire new world to us visually and laid the foundation for countless scientific breakthroughs. And that's what digital bioacoustics is doing with audio for the study of animal communication. Karen says it's like a "planetary-scale hearing aid that enables us to listen anew with both our prosthetically enhanced ears and our imagination."

Harper: What a great analogy.

Bushwick: Yeah, it'll be really interesting to see where the research goes from here and how it might change the way we think about the so-called divide between humans and nonhumans.

Harper: Yeah, I'm already questioning everything I thought I knew. Well, Sophie, thank you so much for sharing all of this with us.

Bushwick: Squeak, squeak, buzz, buzz, my friends.

Harper: And the buzz, buzz, right back to you.

If you're still curious, you can read more about this on our site, including Sophie's Q&A with Karen Bakker, and, of course, in Karen's new book, The Sounds of Life. Thanks for tuning in to Science, Quickly. This podcast is produced by Jeff DelViscio, Tulika Bose, and me, Kelso Harper. Our theme music was composed by Dominic Smith.

Special thanks today to Martin Bencsik of Nottingham Trent University and James Nieh at the University of California, San Diego, for providing excellent examples of honeybee toots and quacks and whoops.

Bushwick: Don't forget to subscribe. And for more in-depth science news features, podcasts and videos, head to ScientificAmerican.com. For Scientific American's Science, Quickly, I'm Sophie Bushwick.

Harper: And I'm Kelso Harper. See you next time.

Harper: I'm so excited. Also, I will be turning your "boo-boo ba-ba sweet potato" into [lowers pitch] "boo-boo ba-ba sweet potato."

Bushwick: Yes. That's all I wanted.

Sophie Bushwick is tech editor at Scientific American. She runs the daily technology news coverage for the website, writes about everything from artificial intelligence to jumping robots for both digital and print publication, records YouTube and TikTok videos and hosts the podcast Tech, Quickly. Bushwick also makes frequent appearances on radio shows such as Science Friday and television networks, including CBS, MSNBC and National Geographic. She has more than a decade of experience as a science journalist based in New York City and previously worked at outlets such as Popular Science, Discover and Gizmodo. Follow Bushwick on X (formerly Twitter) @sophiebushwick.

Kelso Harper is an award-winning multimedia editor at Scientific American. As a producer, editor and host, they work on short documentaries, social videos and Scientific American's podcast Science Quickly. They have a bachelor's in chemistry from Johns Hopkins University and a master's in science writing from MIT. Previously, they worked with WIRED, Science, Popular Mechanics, and MIT News. Follow them on LinkedIn and Instagram.

Jeff DelViscio is currently chief multimedia editor/executive producer at Scientific American. He is former director of multimedia at STAT, where he oversaw all visual, audio and interactive journalism. Before that, he spent more than eight years at the New York Times, where he worked on five different desks across the paper. He holds dual master's degrees from Columbia University in journalism and in earth and environmental sciences. He has worked aboard oceanographic research vessels and tracked money and politics in science from Washington, D.C. He was a Knight Science Journalism Fellow at the Massachusetts Institute of Technology in 2018. His work has won numerous awards, including two News and Documentary Emmy Awards.
