Elephant0991

joined 2 years ago
[–] [email protected] 7 points 2 years ago

You could be lying down in this line, just like that guy!

[–] [email protected] 17 points 2 years ago* (last edited 2 years ago)

Dead boner in Aisle 6, in Aisle 6!

[–] [email protected] 3 points 2 years ago

Yeah, some sources say that the examples raised have been fixed by the different LLMs since exposure. The problem is algorithmic, though, so if you can follow the research, you may be able to come up with other strings that cause a problem.

[–] [email protected] 3 points 2 years ago

If he were human, I'd say he'd have neck pain.

[–] [email protected] 1 points 2 years ago* (last edited 2 years ago)

I still interact with one irreplaceable community. If there isn't enough subscribed content on Lemmy, I do go back and look at my feed. Most of my interactions are here, though.

[–] [email protected] 1 points 2 years ago (1 children)

There did seem to be a controversy in March about whether or not the word should go.

[–] [email protected] 4 points 2 years ago

Haha, only if you quickly skipped the "and people" part. Happens all the time. Brain cycles are expensive.

[–] [email protected] 6 points 2 years ago (1 children)

Those seem like questions for more research.

I bet it's more pernicious because it is so easy to incorporate AI suggestions. If you do your own research, you have to think a bit about whether the references/search results may be bad, and you still have to put the info in your own words so that you don't offend the copyright gods. With AI help, well, the spelling is good, the sentences are perfectly formed, the information is plausible, and it's probably not a straightforward copy, so why not just accept it?

[–] [email protected] 7 points 2 years ago* (last edited 2 years ago)

I am being brainwashed by AI!

Here's the paper: https://dl.acm.org/doi/10.1145/3544548.3581196

Abstract

If large language models like GPT-3 preferably produce a particular point of view, they may influence people’s opinions on an unknown scale. This study investigates whether a language-model-powered writing assistant that generates some opinions more often than others impacts what users write – and what they think. In an online experiment, we asked participants (N=1,506) to write a post discussing whether social media is good for society. Treatment group participants used a language-model-powered writing assistant configured to argue that social media is good or bad for society. Participants then completed a social media attitude survey, and independent judges (N=500) evaluated the opinions expressed in their writing. Using the opinionated language model affected the opinions expressed in participants’ writing and shifted their opinions in the subsequent attitude survey. We discuss the wider implications of our results and argue that the opinions built into AI language technologies need to be monitored and engineered more carefully.

[–] [email protected] 6 points 2 years ago

OK, then. I guess the summary would be: the asteroid was looser than we thought, and we had no idea how the boulders got ejected from the surface because of our impact.

[–] [email protected] 25 points 2 years ago (2 children)

Somehow, I found the lead scientist's statement and the associated news to be click-bait. Right, you crash something into a composite rock and expect no ejecta from it? That's pretty freaking believable. Ejecta is, like, the most basic physics you could expect from it. This is just to grab your attention so they can get more funding (which they may deserve, even if this is irritating), folks.
