this post was submitted on 16 Oct 2023
23 points (61.9% liked)

Technology


This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in a DM before posting product reviews or ads. Such posts are otherwise subject to removal.


Rules:

1: All Lemmy rules apply

2: No low-effort posts

3: NEVER post nazi/ped*/gore stuff

4: Always post article URLs or their archived-version URLs as sources, NOT screenshots. This helps blind users.

5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)

6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: crypto related posts, unless essential, are disallowed

founded 6 years ago
[–] [email protected] 11 points 2 years ago (41 children)

GPT-4 cannot alter its weights once it has been trained so this is just factually wrong.

The bit you quoted is referring to training.
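The training/inference distinction can be illustrated with a toy sketch (pure Python, hypothetical one-weight "model", not how GPT-4 actually works): during training the weight is updated, while at inference time it is only read, never written.

```python
import random

random.seed(0)
w = random.random()          # training starts from a random weight

def predict(x, w):
    return w * x             # inference: reads the weight, never writes it

# Training loop: gradient descent on squared error nudges w toward 2.0,
# since the "data" follows y = 2 * x.
for _ in range(200):
    x = random.uniform(-1, 1)
    error = predict(x, w) - 2 * x
    w -= 0.5 * error * x     # the weight changes ONLY here, during training

w_after_training = w
for _ in range(100):         # "deployment": repeated predictions
    predict(random.uniform(-1, 1), w)

assert w == w_after_training  # inference left the weight untouched
```

The point of the sketch: once the loop that writes `w` ends, no amount of usage alters the model.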

They are not intelligent. They create text based on inputs. That is not what intelligence is, unless you hold the extremely dismal view that humans are text-creation machines with no thoughts, no feelings, no desires, no ability to plan... basically, no internal world at all.

Recent papers say otherwise.

The conclusion the author of that article comes to (that LLMs can understand animal language) is, at the very least, problematic. I don't know how they expect that to happen.

[–] [email protected] 1 points 2 years ago (40 children)

In what sense does your link say otherwise? Is a world model the same thing as intelligence?

[–] [email protected] -1 points 2 years ago (11 children)

How would creating a world model from scratch not involve intelligence?

[–] [email protected] 1 points 2 years ago (2 children)

It's not from scratch, it's seeded and trained by humans. That is the intelligence.

[–] [email protected] 2 points 2 years ago

From scratch in the sense that it starts with random weights, then experiences the world and builds a model of it through the medium of human text. Text is used because it's computationally tractable for now, and it has produced really impressive results. There's no inherent need for text, though: similar models have been trained on time-series data, and it will soon be feasible to hook one of these models up to a webcam and a body and let it experience the world on its own. No human intelligence required.
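The "model built purely from the text it is fed" idea can be sketched with a hypothetical character-bigram toy (nothing like a real LLM's architecture): it starts knowing nothing, and every statistic it holds comes from the text it experiences.

```python
import random
from collections import defaultdict

random.seed(0)
# counts[a][b]: how many times character b has followed character a.
# Starts empty: the "model" has no knowledge before seeing any text.
counts = defaultdict(lambda: defaultdict(int))

def train(text):
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1            # experience updates the model

def most_likely_next(char):
    followers = counts[char]
    if not followers:                # never seen this char: pure guess
        return random.choice("abcdefghijklmnopqrstuvwxyz ")
    return max(followers, key=followers.get)

train("the cat sat on the mat and the cat ran")
```

After training, `most_likely_next('h')` returns `'e'`, a regularity the model extracted entirely from the text rather than from any seeded knowledge.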

Also, your point is kind of silly. Human children learn language from older humans, and that process has been recursively happening for billions of years, all the way through the first forms of life. Do children not have intelligence? Or are you positing some magic moment in human evolution where intelligence just descended from the heavens and blessed us with it?

[–] [email protected] 1 points 2 years ago (1 children)

Just like humans are! Do you know what happens when a human grows up without any training by other humans? They are essentially feral, unable to communicate, maybe even unable to think the way we do.

[–] [email protected] 1 points 2 years ago (1 children)

LLMs do not grow up. Without training they don't function properly. I guess in this respect they are similar to humans (or dogs, or anything else that benefits from training), but that still does not make them intelligent.

[–] [email protected] 2 points 2 years ago (1 children)

What does it mean to "grow up"? LLMs get better at their tasks during training, just as humans do while growing up. You have to clearly define the terms you use.

[–] [email protected] 1 points 2 years ago (1 children)

You used the term and I was using it with the same usage you were. Why are you quibbling semantics here? It doesn’t change the point.

[–] [email protected] 2 points 2 years ago (1 children)

Yes, I used the term because "growing up" has a well-defined meaning with humans. It doesn't with LLMs, so I didn't use it with LLMs.

[–] [email protected] 1 points 2 years ago (1 children)

Did you have a point or are you only trying to argue semantics?

[–] [email protected] 2 points 2 years ago (1 children)

LLMs do not grow up.

You should ask yourself that question.

[–] [email protected] 0 points 2 years ago (1 children)
[–] [email protected] 2 points 2 years ago

I don't know what your point was in asking that question, you should know yourself.
