Dolphin LLM
hendrikboom writes:
Google, in collaboration with Georgia Tech and the Wild Dolphin Project, has announced DolphinGemma, an AI model designed to analyze and generate dolphin vocalizations. With about 400 million parameters, the model is compact enough to run on the Google Pixel phones used in ocean fieldwork, allowing researchers to process dolphin sounds in real time.
DolphinGemma builds on Google's lightweight Gemma model family, optimized for on-device use. It was trained on an extensive labeled dataset collected over four decades by the Wild Dolphin Project, the longest-running underwater dolphin research initiative. These audio and video records capture generations of Atlantic spotted dolphins in their natural habitat, complete with behavioral context and individual dolphin identities.
The goal is ambitious: to detect structure and potential meaning in dolphin sounds, including the signature whistles used between mothers and calves and the aggressive "squawks" exchanged during disputes. DolphinGemma functions like a language model for dolphins, predicting likely vocalizations based on prior sequences and helping researchers uncover patterns and hidden rules in their communication.
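To make the "language model for dolphins" idea concrete, here is a minimal sketch of autoregressive next-sound prediction, the same idea DolphinGemma scales up with a neural network. This is purely illustrative, not the actual DolphinGemma code; the token labels (`whistle_A`, `click`, `squawk`) are hypothetical stand-ins for classified dolphin sounds, and a simple bigram count replaces the learned model.

```python
from collections import Counter, defaultdict

def train_bigram(sequences):
    """Count how often each sound token follows another across recordings."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most likely token to follow `token`, or None if unseen."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

# Hypothetical tokenized recordings: each list is one vocalization sequence.
recordings = [
    ["whistle_A", "click", "whistle_B"],
    ["whistle_A", "click", "squawk"],
    ["whistle_A", "click", "whistle_B"],
]

model = train_bigram(recordings)
print(predict_next(model, "click"))  # -> whistle_B (follows "click" most often)
```

A real system would first discretize raw audio into such tokens and train a transformer over them, but the prediction task, guess the next vocalization given the ones before it, is the same.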
And here's the DolphinGemma site.
Will this LLM generate AI spam for dolphins? And is there any way we can know what it's saying?
Additional discussion on the matter at The Guardian: We're close to translating animal languages - what happens then?
Processed by jelizondo
Read more of this story at SoylentNews.