[–] [email protected] 21 points 2 months ago (11 children)

It's an audio-in, audio-out model. So after providing it with a dolphin vocalization, the model does just what human-centric language models do: it predicts the next token. If it works anything like a standard LLM, those predicted tokens could be sounds that a dolphin would understand.

It's a cool tech application, but all they're technically doing right now is training an AI to sound like dolphins. Unless they can somehow convert this to actual meaning/human language, I feel like we're just going to end up with an equally incomprehensible Large Dolphin Language Model.
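
For intuition, here is a minimal sketch of what "predict the next token" means when the tokens are audio codes rather than words. Everything below is a made-up illustration: the integer tokens, the tiny corpus, and the bigram "model" are stand-ins, not DolphinGemma's actual tokenizer or architecture.

```python
# Toy sketch: next-token prediction over "audio tokens".
# Real systems quantize waveforms into discrete codes with a neural codec;
# here the tokens are just made-up integers and the "model" is a bigram count table.
from collections import Counter, defaultdict

def train_bigram(token_sequences):
    """Count which token tends to follow which (a stand-in for a trained model)."""
    counts = defaultdict(Counter)
    for seq in token_sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def generate(counts, prompt, length=10):
    """Autoregressively extend a prompt: pick the most likely next token, repeat."""
    out = list(prompt)
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return out

# Pretend these integers are tokenized dolphin vocalizations (entirely made up).
corpus = [[1, 2, 3, 2, 4], [1, 2, 4, 5], [3, 2, 4, 5, 1]]
model = train_bigram(corpus)
print(generate(model, prompt=[1, 2]))  # continues the "vocalization" token by token
```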

[–] [email protected] 2 points 2 months ago (6 children)

An emergent behavior of LLMs is the ability to translate between languages. That is, we taught a model Spanish and we taught it English, and it automatically knows how to translate between them. If we taught it English and dolphin, it should be able to translate anything with shared meaning.
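
A rough illustration of that claim: if training leaves words from both languages sitting near each other in one shared representation space, "translation" can reduce to nearest-neighbor lookup. The vectors below are hand-picked toy values, not learned embeddings, and this is a sketch of the idea rather than evidence for it.

```python
# Toy illustration: translation as nearest-neighbor search in a shared space.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Hand-picked toy vectors standing in for a learned shared embedding space.
english = {"water": (0.9, 0.1), "fish": (0.2, 0.95), "sun": (0.7, 0.7)}
spanish = {"agua": (0.88, 0.12), "pez": (0.22, 0.9), "sol": (0.72, 0.68)}

def translate(word, src, tgt):
    """Return the target-language word whose vector is closest to the source word's."""
    vec = src[word]
    return max(tgt, key=lambda w: cosine(vec, tgt[w]))

print(translate("fish", english, spanish))   # -> pez
print(translate("water", english, spanish))  # -> agua
```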

[–] [email protected] 7 points 2 months ago (3 children)

Is it emergent?! I've never seen this claim. Where did you see or read this? Do you mean that it can simply work in any language it was trained on, accepting and returning tokens based on the language of the input and/or the language requested?

[–] [email protected] 1 points 2 months ago (2 children)

I mean, we didn't have to teach them to translate. That surprised some people, though not everyone.

https://www.asapdrew.com/p/ai-emergence-emergent-behaviors-artificial-intelligence

[–] [email protected] 4 points 2 months ago* (last edited 2 months ago)

Yeah, that article is so full of bullshit that I don't believe its main claim. It compares LLMs to the understanding built by children, says they make "creative content", and claims LLMs do "chain of thought" without prompting. It also presents the two sides as equals in logical reasoning, as if the mystical interpretation were on the same level of rigor as the systems explanation. Sorry, but this article leaves me entirely unconvinced that I should take the claim seriously. There are thousands of websites that do natural-language translation using examples from existing media and the like (Duolingo did this for a long time, and sold the results); just mining that data gives you the basis to build a network of translations that reads like natural language, no mysticism required.
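
To make that "just mine the parallel data" point concrete, here is a toy version: given sentence pairs that are already translations of each other, simple co-occurrence statistics recover a rough word-for-word lexicon. The corpus and the Dice-coefficient scoring below are illustrative assumptions, not any real translation pipeline.

```python
# Toy sketch: extracting a crude bilingual lexicon from parallel sentence pairs.
from collections import Counter, defaultdict

pairs = [
    ("the cat drinks milk", "el gato bebe leche"),
    ("the dog drinks water", "el perro bebe agua"),
    ("the cat sees the dog", "el gato ve el perro"),
    ("the dog drinks milk", "el perro bebe leche"),
]

cooc = defaultdict(Counter)          # how often a source word and target word share a pair
src_freq, tgt_freq = Counter(), Counter()
for en, es in pairs:
    for e in en.split():
        src_freq[e] += 1
        for s in es.split():
            cooc[e][s] += 1
    for s in es.split():
        tgt_freq[s] += 1

def guess_translation(word):
    """Score candidates with a Dice coefficient: 2 * cooc / (freq_src + freq_tgt)."""
    return max(cooc[word], key=lambda s: 2 * cooc[word][s] / (src_freq[word] + tgt_freq[s]))

print(guess_translation("cat"))     # -> gato
print(guess_translation("drinks"))  # -> bebe
```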

[–] [email protected] 2 points 2 months ago

It can translate because the languages have already been translated. No amount of scraping websites can translate human language to dolphin.
