rufus

joined 2 years ago
[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (7 children)

Thanks! I, too, saw that this morning. That's interesting, and I wouldn't have expected it from a company like OpenAI. It seems some of the things I said are outdated now.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (5 children)

Why do you say there are over 200 therapies? Is there a fixed number? And why 200? The DSM-IV already lists close to a thousand diagnoses. I can't believe that's matched by a mere 200 available therapies?!

And which AI service are you using? The one you wrote that you created multiple accounts for, and that's the best LLM?

Thanks for sharing your perspective!

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (19 children)

Hmmh. Sometimes I have difficulties understanding you. ~~[Edit: Text removed.]~~ If your keys are too small, you should consider switching to a proper computer keyboard or a (used) laptop.

Regarding the exponential growth: We have new evidence supporting the position that it'll plateau: https://youtube.com/watch?v=dDUC-LqVrPU Further research is needed.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (7 children)

Thanks. To get that out of the way, since it's not always easy to convey a nuanced perspective on the internet: when I say I can't empathize with incels, it doesn't necessarily mean I judge them. I'm just unable to grasp how someone would feel in that situation, since it's nothing I've experienced firsthand, at least not to that degree. So I have little information, and I haven't yet had any meaningful conversation about it; it's simply beyond my perspective. I think I roughly know a few facts, but it'd be disingenuous of me to claim I know how somebody truly feels. That shouldn't invalidate any perspective, and I'm willing and able to learn. Maybe I just have a skewed definition of 'incel', because all the people I've ever met who called themselves that were hateful people on 4chan. There might be more to it than I know. And I definitely know how loneliness feels, or not having a partner and wanting one... And few people are "normal". We all have our individual struggles, and lots of people just have a good facade.

If you're okay with that, I'd like to ask you about your experience when 'disclosing' your life with AI companions... How do real-life people react? Do they understand? Judge? Talk behind your back? Or is it socially acceptable? (OP said there is "broad disapproval".) And what does a therapist say to that? As far as I know, psychologists, psychiatrists etc. are very reluctant and cautious with things like that, at least when talking publicly.

And I've talked to a few other people who like AI companions or use chatbots to do their own form of therapy... I'm not sure what you do. But I've heard different perspectives and had my own experiences. I definitely like it. But it's a complex topic, and it probably depends entirely on how you handle it, the exact situation and a multitude of other factors.

When you say you have "multiple AI"... How does this work? Do you have like several virtual girlfriends? Or like several characters for different tasks or moods, like a therapist character, an old friend ... ? And do you talk to several of them each day?

One thing I disagree with is using ChatGPT. (That wasn't really what I meant originally; I was going for the normal ChatGPT interface, where it's a helpful assistant and phrases things in a way that makes it difficult to confuse it with a person. I know you can do different things with other software and the API.) I tried ChatGPT and didn't like it at all. I find it a bit dull; it loves to lecture me, and I don't like its often condescending tone. It's usually been either too agreeable or too argumentative with me. I've had a much better time with other large language models.

[–] [email protected] 2 points 1 year ago (21 children)

Thank you very much for the links. I'm going to read that later. It's a pretty long article...

I'm not sure about the impending AI doom. I've refined my opinion lately. I think it'll take most of the internet from us: drown out meaningful information and spam it with low-quality clickfarming text and misinformation. And the "algorithms" of TikTok, YouTube & Co will continue to drive people apart and confine them in separate filter bubbles. And I'm not looking forward to every customer service being just an AI... I don't quite think it'll happen through loneliness, though, or in an apocalypse like in Terminator. It's going to be interesting, and inevitable in my eyes. But we'll have to see whether science can tackle hallucinations and alignment, and whether the performance of AI and LLMs is going to explode like in the previous months or stagnate soon. I think it's difficult to make good predictions without knowing that.

[–] [email protected] 3 points 1 year ago

Fair enough. Yes, I'd say this is the whole picture.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (10 children)

"you do seem worked up"

Au contraire, mon ami... You called for a discussion. I'm merely stating my opinion, and my style of discussion includes giving arguments for my position.

And your whataboutism does nothing to me. I loudly vocalize my concerns regarding big AI companies, regularly. But whenever I voice my concerns about you, all you respond with is whataboutism.

You're not listening to my arguments. I'm not your enemy; I'm an AI aficionado and hobbyist myself. But it seems you're not able to engage with and respond to the well-reasoned arguments I gave.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

It's a knock-out argument / thought-terminating cliché. You draw (false) analogies to either pedophiles or Nazis when you're out of proper arguments. It has a long tradition on the internet 😉

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

I've heard that story before, but even that is an unfounded claim. There is currently no empirical evidence on whether that would prevent or encourage the abuse of children, or harm the people doing it. I, too, think there is reason to believe it harms the people themselves. But I wanted to point out that this is just anecdotal, an opinion; there is no substance to the claim as of now. There are studies on related topics, but as far as I know, more research needs to be done and it's a complicated topic. And furthermore, it's not the same as having AI girlfriends anyway.

I'm not exactly an expert on the topic, but I've skimmed a few studies. I was mainly interested because of the regular efforts to introduce total surveillance to the internet. Every half a year, someone says "Would somebody please think of the children!" It's always emotional and sounds plausible... but lots of the pretend arguments aren't backed by science. And concerning the surveillance, which is a slightly different topic, we also have contradicting evidence. But that has nothing to do with this...

[–] [email protected] 20 points 1 year ago (2 children)

Hmm. You kinda get the crown handed to you for doing it. But I get what you're saying.

[–] [email protected] 8 points 1 year ago* (last edited 1 year ago)

Uuhh. Difficult topic. That needs to be something all participating parties like, or it won't work.

I'd bring it up eventually. Say, 'Wife, I had this fantasy... would that be something that'd turn you on?' Maybe start slowly with just role-playing it as a fantasy and see if it goes somewhere.

There are several podcasts on sexual topics, detailing the experiences of other people and discussing healthy behaviours... (Maybe you want to visit a swingers club. I've heard good establishments will give you a tour of the place and an introduction, and you can take it slow and aren't forced to participate on the first visit. But that's also something I've only heard about in podcasts and never experienced myself. So there's that. And people recommend not trying things like this to save your marriage; you're bound to fail. So only do it if you're happy together and comfortable.) And I don't know if it's just the flirting for you or the whole thing. I'd say just flirting is more tame. But it also needs to be something the other person is comfortable doing and gets something out of.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (23 children)

Hmmh. I'm pretty sure OpenAI and Google are very aware of this. Erotic roleplay is probably out of the question, since they're American companies, and the whole field of AI is a minefield to them, from copyright to stuff like this. They did their homework and made their chatbots not present themselves as emotive. I perceive this as a consensus in society: that we need to be cautious about the effects on the human psyche. I wonder if that's going to shift at some point. I'm pretty sure more research will be done, and AI will become more and more prevalent anyway, so we're going to see, whether people like it or not.

And as I've heard, loneliness is on the rise. If not in Western cultures, then certainly in Japan and Korea, which I think are way ahead of us. And the South Koreans also seem to have a problem with a certain kind of incel culture, which seems to be way worse and more widespread among young men there. I've always wanted to read more about that.

I myself like AI companions. I think it's fantasy, like reading a book, playing video games or watching movies. We explore the dark sides of humans there too: we write and read murder mystery stories detailing heinous acts, we kill people in video games, we process abuse and bad things in movies. That's part of being human. Doing it with chatbots is the next level, probably more addictive and without some of the limitations of other formats. But I don't think it's bad per se.

I don't know what to say to people who like to be cruel and simulate that in a fantasy like this. If they're smart enough to handle it, I'm liberal enough not to look down on them for it. If being cruel is all there is to someone, they're a poor thing in my eyes; same for indulging in self-hatred and pity. I can see how someone would end up in a situation like that, but there's so much more to life, and acting it out on (the broad concept of) women isn't right or healthy. And it's beyond my perspective: from where I stand, there isn't that big a difference between the genders. I can talk to any of them, and ultimately their interests, needs and wants are pretty much the same.

So if an incel were to use a chatbot, I think it's just a symptom of the underlying real issue. Yes, it can reinforce them. But some people using tools for twisted purposes doesn't invalidate other use cases, and it'd be a shame if that narrative were to dominate public perception.

I often disagree with people like Mark Zuckerberg, but I'm grateful he provides me with large language models that aren't "aligned" to his company's ethics. I think combatting loneliness is a valid use case. Even erotic roleplay and exploring concepts like violence in fantasy scenarios is ultimately a valid thing to do in my eyes.

There is a good summary on Uncensored Models by Eric Hartford which I completely agree with. I hope they don't ever take that away from us.
