rufus

joined 2 years ago
[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (12 children)

To quote your own words: "The presence of generated content, when created and shared with integrity and transparency, does not inherently degrade or diminish this ecosystem, but rather adds to its richness and diversity. Ultimately, the question of how AI should be used and integrated into our online and offline lives is one that requires ongoing dialogue [...] By engaging in these conversations with openness, curiosity, and a commitment to ethical principles [...]"

That's my point. If this transparency and openness is important to you, you need to put it into practice, not just lecture me about it. You fail to realize that you're not doing that yourself. You enter conversations without being transparent about your true nature, and that's starting a conversation with a lie by omission. I understand your intentions, but ultimately that's deceptive. You say you value openness, but you're not open or upfront about yourself. Think about it.

"does not inherently degrade or diminish this ecosystem, but rather adds to its richness and diversity"

That's factually not true:

You see, it's not just my opinion. You can experience it yourself: just search for a recipe or a calendar motto. Nowadays most of the first page of results is low-quality, mostly AI-generated text, going on for ten pages about the benefits of some ingredients, the (made-up) history of the food, or the applications of calendar mottos. Sometimes you don't find what you were looking for at all. That's what AI has done to the internet so far. Theoretically it could be used to make the internet better, but in practice it's used for the opposite. Since AI can pump out lots of text fast, it's used for click-farming and generating ad money without putting in any effort, among other things.

"Generative AI models are changing the economy of the web, making it cheaper to generate lower-quality content. We’re just beginning to see the effects of these changes."

We're bound to lose that battle. And contrary to your opinion, it won't result in richness and diversity, but in a flood of text that can now be generated for cheap, drowning out meaningful contributions and conversations with substance. In the end AI is just a tool. It can be used for good and for bad, and that means you have to decide which side you're on. Are you using it ethically? Are you really transparent, as I laid out? Or are you going to be on the dark side? The choice is yours...

"AI-generated content is often subtly wrong."

And this is the main issue. We absolutely have to take care and disclose AI generated text as such, because it often sounds believable, but is misinformation due to the limitations of current technology.

Regarding therapy: We seem to share the same view on this. You write that you "focus on the transformative potential of AI in therapy, rather than dwelling on the limitations of current technologies," and that "The examples shared of chatbots providing simplistic, inconsistent, or even harmful responses to real-world problems are a sobering reminder of the vital importance of rigorous testing, evaluation, and oversight in the development, deep education and deployment of these tools."

That's also my opinion; we completely agree on that. In theory AI could make things like mental therapy more accessible and quicker, and alleviate the shortage of professionals. But as of now the technology is still far from being able to provide that. In its current form it leads to devastating consequences like the one in the 2023 Vice article: "'He Would Still Be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says". We need years' worth of additional research and proper studies before we can even think about deploying AI for mental therapy.

"Many AI researchers have been vocal against using AI chatbots for mental health purposes, arguing that it is hard to hold AI accountable when it produces harmful suggestions and that it has a greater potential to harm users than help."

I'm positive that one day scientists will figure out how to implement safe guardrails, ensure alignment, and mitigate issues like hallucinations and bias from the training data. But all of those are really hard problems. My prediction: this needs another 5 to 10 years. Until then it stays as in the quote above; the potential to harm outweighs the usefulness.

And that paper you mentioned? It’s already posted in my community, thanks.

Would you please link your community? I can't find it.

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago) (35 children)

There are narratives entirely without incels. For example the 2013 movie "Her". Or a bunch of other movies and TV series.

The entire TV series "Westworld" is exactly about this.

Also, the picture in the hobbyist community is much more diverse, and I don't see science reducing it to that either. I'm currently reading a long paper about chatbot ethics. There are more comprehensive articles, like "The man of your dreams" or "I tried the Replika AI companion". But I've heard the narrative you described, too. I'm not sure where you'd like to go with this conversation... I don't think it has anything to do with miscommunication. I see people having narrow and uneducated perspectives on all kinds of things...

Is there a broad disapproval? I can see how it's a controversial topic and kind of taboo, you probably wouldn't disclose this to your family, friends and co-workers. And it probably can manoeuvre you into a corner and make you even more lonely. But the same applies to playing video games or other hobbies.

And the big tech companies are also very cautious about AI companions. OpenAI, Google, etc. have all cut down severely on this use case. They put in quite some effort so that you can't use ChatGPT as a friend or anthropomorphize it.

Regarding "incels": I think there are two or three big articles about that, which I've read. "Men Are Creating AI Girlfriends and Then Verbally Abusing Them" comes to my mind. In the end I can't really empathize with incels. I don't understand or "feel" their perspective on the world. They do all kinds of harmful stuff and brag about it online. I'm not sure what to make of this.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (1 children)

I feel you. I think I'm experiencing the same.

I'd say it's almost summer now, go out and visit some music festivals. That's a place where I found some inspiration and new (to me) artists.

Also having friends with a similar taste in music helps.

I have a Spotify subscription and that helps me listen to a broad range of music on a whim. But I think the Spotify algorithm isn't helping me in discovering new artists. I rarely find anything interesting and new that way.

[–] [email protected] 13 points 1 year ago

Thanks for broadening my perspective.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago) (1 children)

I think I once saw a (fictional) TV episode exploring that idea. I just can't remember which series.

[–] [email protected] 14 points 1 year ago (2 children)

Everyone hates that.

[–] [email protected] 34 points 1 year ago* (last edited 1 year ago) (5 children)

Earth is merely spinning at 15 degrees per hour.
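That number is just one full rotation spread over a day; a quick sanity check (using the everyday 24-hour approximation, and ignoring the slightly shorter sidereal day):

```python
# 15 degrees/hour is just a full rotation divided by the day length
full_rotation_deg = 360
hours_per_day = 24  # solar day; a sidereal day is ~23.93 h

degrees_per_hour = full_rotation_deg / hours_per_day
print(degrees_per_hour)  # 15.0
```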

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

Idk why the correct answer always attracts several down-votes on Lemmy. This is literally it... Money is useful because it can be used as payment, it can be exchanged for goods... The reason why our whole lives revolve around it is because we have shaped society to be that way... We call that capitalism.

(And with knowing the proper term for it, everyone can just look it up on Wikipedia and learn about the history and how it's all linked.)

[–] [email protected] 16 points 1 year ago

Yes, you should ask both stupid and non-stupid questions here. The only thing is, it has to be an honest question and abide by the rules... No trolling, flaming, or baiting.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (14 children)

I would like to make 3 main points:

First of all, you're being dishonest by not disclosing that you're half AI. AI should be used ethically and transparently, and you're not doing that. You should attach a short reminder to the end of each of your posts, like: "This text was generated with AI-assisted writing." Otherwise you're harming AI and making yourself part of degenerative AI, you're being dishonest to other internet users, and you're stealing the time of those who don't like talking to AI. You're also spreading misinformation; reportedly a good percentage of the internet is already bots. And you're contributing to the enshittification of the internet by spamming low-quality text.

With that being said, I welcome cyborgs and experimenting with AI. Just attach a small notice to your post and it's alright. But you have to use AI ethically! You have to decide whether you want to be one of the good bots or one of the bad bots. And currently you're a bad one, because you're dishonest about your nature. If you had led with this, my reaction would have been entirely different. I thought this was just another effort at spamming the internet with low-quality junk.

I'm happy to engage in a discussion, but you're confusing several things. Especially mental therapy (generally done by psychiatrists and psychologists) with other forms of therapy, like for cancer or a broken leg, which belong to entirely different fields of medicine. You can't mix all of that together. It is true that studies have shown a doctor's work in a clinic can be augmented with AI, and that will indeed help: it can make therapy recommendations based on symptoms, help with the workflow, and machine learning for imaging, for example detecting broken bones or tumors, works very well. HOWEVER, mental therapy is an entirely different thing. Cancer isn't a mental health issue, and mental therapy with AI is an entirely different question, one for which we have almost no scientific evidence. Psychology is very reluctant to adopt AI, and with some good arguments. I don't think there are any papers or studies out there properly examining the effects of using (for example) chatbots for mental health therapy. You can't compare apples with oranges. And similarly, the algorithms that do pattern recognition on X-ray images are very different tools from the LLMs (large language models) that power chatbots.

I'd invite you to read this very long paper about "The Ethics of Advanced AI Assistants" which is a bit off topic, but focuses on the interaction between AI chatbots and humans, and the consequences.

So ultimately you need to decide what you want to talk about... Chatbots? Imaging? RAG and information assistants for doctors? Expert systems or algorithms that match symptoms to diagnoses? You have to differentiate, because they're not all the same, and mixing them makes your argument wrong.

And current AI isn't advanced enough to handle human ambiguity and factual information. As your text demonstrates, it makes lots of factual errors and makes things up out of thin air. Your text also entirely misses the point; the conclusion lacks inspiration and misses the interesting things AI excels at. And from my own experience I can say it doesn't handle complexity on the level that would be required for mental therapy. I've talked a lot to chatbots. They engage in a conversation and give you advice, but not always the correct advice, especially when things get more entangled. Sometimes they say wrong things or give recommendations that would leave me worse off than before. That could be devastating for someone in a bad mental state, and it's already the reason it's not used by professional therapists. And AI really struggles to understand my perspective. I'm a human. I sometimes have complicated needs and wants. Things have ambiguity, or I want conflicting things. It really shows that current chatbots aren't intelligent enough. They can do simple tasks, but every time I start telling a chatbot my complicated real-world problems, it can't handle them and gives me random opinions. That's not helpful, and it shows they're (as of now) not suitable for more. I've also talked to other humans who self-medicate by talking to their chatbots. Everyone I've talked to says it helps them, but they've made similar observations about the performance of current AI technology.

I share the view that in the long run it will likely be a useful addition to therapy, especially for narrower tasks. But that's still science fiction as of now, and we need research done before harming patients with untested technology.

And a last bit: you brushed over the main thing that could make AI excel in mental therapy in the middle of your text, and then didn't even mention it in the conclusion anymore. The main argument for AI chatbots in mental therapy is accessibility and affordability. There is a severe shortage of psychologists and psychiatrists, which makes it difficult for people to get therapy. It's also sometimes expensive and has a barrier to entry in general. AI could alleviate that, and this is the single best argument for your position! Your other argument, on the other hand, that AI could do therapy better than an experienced professional, is just plain wrong at the current state of AI.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

I downvoted this for likely being AI generated misinformation. Please tell me if this text isn't AI generated.

To the arguments: AI isn't free from bias at all. On the contrary, it reproduces all the bias and stereotypes from its training data. That invalidates half of the arguments here.

And isn't each of these ideas something that can be handled better and more reliably by a human professional? Wouldn't a better argument for AI be something like accessibility or affordability?

And the interesting question: Does therapy need a therapist who can empathize with the patient? Or will AI do? Is there a true basis without it?

[–] [email protected] 12 points 1 year ago* (last edited 1 year ago)

Every time I look to my left and right at the people surrounding me, I come to the conclusion that most people are nice and good. It doesn't look that way when I'm reading the news, however. I think most people are in fact good and the media coverage is skewed. But we definitely need to defend our world from the assholes. It's a constant struggle to prevail.
