100%. If an AI can do the job just as well (or better), then there's no reason we should be making a person do it.
Part of the problem with AI is that it takes significant skill to recognize where it goes wrong.
As a basic example, ask a language model like ChatGPT to edit a piece of writing. It can go very wrong: removing the wrong words, changing the tone, and making mistakes that a less experienced writer won't even notice. I've had foreign students use AI to write letters or responses, and often the tone is all off. That's one thing, but the student doesn't realize they've written a weird letter. The same goes for grammar checking.
This sets up a dangerous scenario where you already need a deep understanding just to diagnose the results. That's in contrast to non-AI language checkers, which are rule-based and simpler to understand.
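To make that contrast concrete, here's a minimal sketch in Python. The rule-based checker is auditable because every flag traces back to a named rule; the `llm_edit` call at the bottom is a hypothetical stand-in for any language-model API, included only to show that its output carries no such trace:

```python
import re

# Toy rule-based checker: every flag traces back to a named, inspectable rule.
RULES = {
    "double_space": re.compile(r"  +"),
    "repeated_word": re.compile(r"\b(\w+)\s+\1\b", re.IGNORECASE),
}

def rule_check(text):
    """Return (rule_name, offending_snippet) pairs -- the 'why' is explicit."""
    return [(name, m.group(0))
            for name, rx in RULES.items()
            for m in rx.finditer(text)]

text = "I have have written  a letter."
for rule, snippet in rule_check(text):
    print(f"{rule}: {snippet!r}")  # e.g. repeated_word: 'have have'

# An LLM editor, by contrast, just hands back new text with no trace of
# which rule motivated each change (llm_edit is a hypothetical stand-in):
# edited = llm_edit(text)  # -> "I wrote a letter."  ...but why those edits?
```

With the rule-based tool, a student can look up what "repeated_word" means. With the model's rewrite, they just have to trust it.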
Moreover, as you can imagine, the danger is that the people making decisions about hiring and restructuring may not understand this issue.
The good news is that this means many of the jobs AI is "taking" will probably come back once people realize it isn't actually as good as the hype implied.
It’s just that I fear that realisation may not filter down.
You honestly see it a lot in industry. Companies pay $$$ for things that don't really produce results, or what they consider "results" changes. There are plenty of examples of lowering standards and lowering quality in virtually every industry, so the idea that people will realise the trap of AI and reverse course is not something I'm enthusiastic about.
In many ways AI is like pseudoscience: it's a black box. Machine-learning models don't tell you "why" they work. ChatGPT, for instance, is essentially statistical next-word prediction at enormous scale; nothing in it explains its own output.
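A quick illustration of the black-box point (a minimal sketch using scikit-learn, with made-up data): a linear regression hands you coefficients you can read off and reason about, while even a small neural network gives you thousands of raw weights with no human-readable "why":

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Interpretable: each coefficient says how much each feature matters.
lin = LinearRegression().fit(X, y)
print("linear coefficients:", lin.coef_)  # ~ [2.0, -1.0, 0.0] -- readable

# Black box: the "explanation" is thousands of raw weights.
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)
n_weights = sum(w.size for w in mlp.coefs_)
print("MLP weight count:", n_weights)     # no direct 'why' in here
```

Both models may predict equally well here, but only one of them can tell you what it learned.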
So the claim that "good science" prevails is patently false. We live in the era of progressive scientific education, and yet everywhere we go there is distrust of science, the scientific method, critical thinking, and so on.
Do people really think that the average Joe is going to “wake up” to the limitations of AI? I fear not.