this post was submitted on 20 Jul 2023
65 points (91.1% liked)

all 14 comments
[–] [email protected] 9 points 2 years ago (1 children)

When they first launched the Bing AI powered by GPT, I used it for everything; then it became pretty clear they nerfed it, and I've been waiting for a competitor to catch up. Bard's gotten a little better, but it still hallucinates far worse, making up answers.

I'm secretly hoping one of these open-source projects like Llama 2 or Orca leads to a totally unrestricted chatbot, even if it's short-lived.

[–] [email protected] 1 points 2 years ago

Man, I'd love to have the original Bing AI back. Those hallucinations were something else. It was probably a liability issue, but I wish they'd kept it available with a disclaimer.

[–] [email protected] 4 points 2 years ago (1 children)

Yep, definitely. I have a Plus subscription, and stuff that was easy for it just a few months ago now takes several back-and-forths to barely approach similar results.

Science content is where I noticed the most degradation. It just gives me blank "it's not in my training data" answers to questions that used to get comprehensive responses.

I think they’re scaling down the models to make them cheaper to run?

[–] [email protected] 6 points 2 years ago

They’re definitely reducing model performance to speed up responses. ChatGPT was at its best when it took forever to write out a response. Lately I’ve noticed that ChatGPT will quickly forget information you just told it, ignore requests, hallucinate randomly, and has a myriad of other problems I didn’t have when the GPT-4 model was released.

[–] [email protected] 4 points 2 years ago

I wouldn't be surprised if it is getting worse. It's not "real" intelligence that "understands" your questions, and unlike more targeted solutions like GitHub Copilot, it doesn't have a strong use-case focus to guide its progress.

But I think it's also that people are coming to terms with what ChatGPT actually can and, more importantly, cannot do. It's crazy sometimes to hear what the average person thinks the current iteration of AI is capable of.

[–] [email protected] 3 points 2 years ago

Yeah, I just cancelled my Plus sub because it's not valuable to me anymore. It feels nearly as bad as 3.5 at times, and having to go back and forth with it on a 25-messages-per-3-hours budget is extremely stupid.

[–] [email protected] 1 points 2 years ago (2 children)
[–] [email protected] 2 points 2 years ago (1 children)

It's good at writing sentences. The content of the sentences may or may not be real/true.

[–] [email protected] 1 points 2 years ago

I'm just not impressed with guessing which words to use next, especially when I have to verify everything it produces.

[–] [email protected] 1 points 2 years ago

A month ago, as someone working in IT and graphic design, I was getting the "AI is going to replace you" line every day, so I'm loving the AI-decline headlines this week.