Perspectivist

joined 2 months ago
[–] [email protected] 15 points 1 month ago (1 children)

that no one’s doing anything about

The Ocean Cleanup

[–] [email protected] 0 points 1 month ago (1 children)

A linear regression model isn’t an AI system.

The term AI didn’t lose its value - people just realized it doesn’t mean what they thought it meant. When a layperson hears “AI,” they usually think of AGI, but while AGI is a type of AI, the two terms aren’t synonymous.
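
To make the point concrete: a linear regression is just ordinary least squares - a couple of lines of NumPy. A minimal sketch (the numbers here are made up):

```python
import numpy as np

# Toy data that roughly follows y = 2x + 1
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Ordinary least squares: stack [x, 1] and solve for slope and intercept
A = np.column_stack([x, np.ones_like(x)])
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]

print(f"y = {slope:.2f}x + {intercept:.2f}")  # roughly y = 2x + 1
```

Whether a curve fit that simple deserves to be called “intelligent” is exactly the definitional question.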

[–] [email protected] 0 points 1 month ago (3 children)

I’ve had this discussion countless times, and more often than not, people argue that an LLM isn’t intelligent because it hallucinates, confidently makes incorrect statements, or fails at basic logic. But that’s not a failure on the LLM’s part - it’s a mismatch between what the system is and what the user expects it to be.

An LLM isn’t an AGI. It’s a narrowly intelligent system, just like a chess engine. It can perform a task that typically requires human intelligence, but it can only do that one task, and its intelligence doesn’t generalize across multiple independent domains. A chess engine plays chess. An LLM generates natural-sounding language. Both are AI systems and both are intelligent - just not generally intelligent.
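
To make “narrow” concrete, here’s a toy sketch (mine, not anyone’s production engine): a complete minimax player for tic-tac-toe. Within its one game it plays perfectly; outside it, it can do nothing at all.

```python
# A toy "narrowly intelligent" system: a perfect tic-tac-toe
# player via minimax. It masters exactly one task and nothing else.

WIN_LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6),
             (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move), scored from X's perspective: X maximizes, O minimizes."""
    w = winner(board)
    if w is not None:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # draw
    best_score, best_move = None, None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "  # undo the trial move
        if best_move is None or (score > best_score if player == "X" else score < best_score):
            best_score, best_move = score, m
    return best_score, best_move

board = list("X O  O  X")   # hypothetical mid-game position, X to move
print(minimax(board, "X"))  # finds a winning move for X
```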

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago) (5 children)

What does history have to do with it? We’re talking about the definition of terms - and a machine learning system like an LLM clearly falls within the category of Artificial Intelligence. It’s an artificial system capable of performing a cognitive task that’s normally done by humans: generating language.

[–] [email protected] 1 points 1 month ago

The chess opponent on Atari is AI too. I think the issue is that when most people hear "intelligence," they immediately think of human-level or general intelligence. But an LLM - while intelligent - is only so in a very narrow sense, just like the chess opponent. One’s intelligence is limited to playing chess, and the other’s to generating natural-sounding language.

[–] [email protected] 3 points 1 month ago

Apart from myself, I don’t know a single Linux user, and even I only have it installed on that gaming PC that barely gets any use. On the laptop I use daily, I run OSX.

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago) (15 children)

AI is an extremely broad term, and LLMs fall under it. You may avoid calling them that, but it’s the correct term nevertheless.

[–] [email protected] 1 points 1 month ago (1 children)

Never even heard of it.

[–] [email protected] -3 points 1 month ago* (last edited 1 month ago)

You opened with a flat dismissal, followed by a quote from Reddit that didn’t explain why horseshoe theory is wrong - it just mocked it.

From there, you shifted into responding to claims I never made. I didn’t argue that AI is flawless, inevitable, or beyond criticism. I pointed out that reflexive, emotional overreactions to AI are often as irrational as the blind techno-optimism they claim to oppose. That’s the context you ignored.

You then assumed what I must believe, took it upon yourself to argue against that imagined position, and finished with vague accusations about me “pushing acceptance” of something people “clearly don’t want.” None of that engages with what I actually said.

[–] [email protected] 4 points 1 month ago

I often ask ChatGPT for a second opinion, and the responses range from “not helpful” to “good point, I hadn’t thought of that.” It’s hit or miss. But just because half the time the suggestions aren’t helpful doesn’t mean it’s useless. It’s not doing the thinking for me - it’s giving me food for thought.

The problem isn’t taking into consideration what an LLM says - the problem is blindly taking it at its word.

[–] [email protected] 0 points 1 month ago* (last edited 1 month ago)

It doesn’t understand things the way humans do, but saying it doesn’t know anything at all isn’t quite accurate either. This thing was trained on the entire internet and your grandma’s diary. You simply don’t absorb that much data without some kind of learning taking place.

It’s not a knowledge machine, but it does have a sort of “world model” that’s emerged from its training data. It “knows” what happens when you throw a stone through a window or put your hand in boiling water. That kind of knowledge isn’t what it was explicitly designed for - it’s a byproduct of being trained on data that contains a lot of correct information.

It’s not as knowledgeable as the AI companies want you to believe - but it’s not as dumb as the haters claim, either.
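
A crude way to see how that kind of “knowledge” falls out of pure text statistics (a toy sketch, nothing like a real transformer): even a bigram counter over a handful of sentences ends up “knowing” what tends to follow what.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for "the entire internet"
corpus = (
    "the stone broke the window . "
    "boiling water burned his hand . "
    "the stone shattered the glass ."
).split()

# Count bigram transitions: which word tends to follow which
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# Nobody told the "model" what stones do to glass - the association
# simply fell out of the co-occurrence counts.
print(follows["stone"].most_common())    # [('broke', 1), ('shattered', 1)]
print(follows["boiling"].most_common())  # [('water', 1)]
```

Scale that idea up by a few trillion tokens and a few billion parameters and you get something far richer - but the “knowledge” is still a byproduct of the statistics of the training data.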

[–] [email protected] 2 points 1 month ago

How is "not understanding things" preventing an LLM from bringing up a point you hadn't thought of before?
