lily33

joined 2 years ago
[–] [email protected] 6 points 2 years ago* (last edited 2 years ago) (1 children)

Why don't you go to https://huggingface.co/chat/ and actually try to get the llama-2 model to generate a sentence with the n-word?

[–] [email protected] 11 points 2 years ago (3 children)

Then you'd get things like "Black is a pejorative word used to refer to black people"

[–] [email protected] 2 points 2 years ago

Looking this up, I don't think it's actually group related. Instead, xfce seems to use polkit policies to allow the user to execute this file. Search results for "xfpm-power-backlight-helper authentication needed" seem to suggest improper installation.

If you've installed it through nix, I'd try an older version, or stable if you're on unstable, etc. Though I myself don't use xfce, so I don't actually know.
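To check where the problem lies, you could compare your group memberships against what the polkit policy expects. A minimal sketch (the `xfpm` action name is a guess based on the helper's name and varies by version; `pkaction` requires polkit to be installed):

```shell
# List the groups the current user belongs to
id -nG

# List installed polkit actions related to xfce power management,
# if polkit's pkaction tool is available (action names vary by version)
command -v pkaction >/dev/null && pkaction | grep -i xfpm || echo "pkaction not available"
```

If the action doesn't show up at all, that would point to the improper-installation explanation rather than a missing group.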

[–] [email protected] 2 points 2 years ago (2 children)

Which groups are you in already?

[–] [email protected] 1 points 2 years ago

Some regulation proposals seem fine to me, like the proposed EU AI act.

But some of the problems the article lists, like defamation or porn generation, you just can't prevent if you have free and open models out there. You can make these things harder - and people already work on that - but if I have a free and open model, I can also change it (and remove restrictions).

The only way to stop those uses would be to keep AI tightly controlled in a walled garden. In capitalism, those walled gardens will belong to companies.

[–] [email protected] 10 points 2 years ago* (last edited 2 years ago) (4 children)

I don't know, I'm much more concerned about the possibility that we develop huge automation capabilities that end up being controlled by very few people.

As for the specific issues in the article - yes, they're real problems. But every advance in communication and information technology makes it easier to surveil or defame, and can be used for bad policing.

Right now there's a push to regulate the internet to "prevent CSAM" by blocking encryption, and I'm afraid a push to regulate AI will not get better results.

Sure, we can ban predictive policing and demand some amount of transparency (and the EU already wants to do that). But if we try to go further and impose restrictions on the AI models themselves, this will most likely solidify that AI is controlled by a few powerful corporations. After all, highly regulated models by definition can't be free and open.

[–] [email protected] 4 points 2 years ago

I've just read the abstract of the study - but it doesn't seem to be about people mindlessly copying the AI and producing biased text as a result. Rather, it's about people seeing the points the AI makes, thinking "Good point!" and adjusting their own opinion accordingly.

So it looks to me like it's just the effect of certain viewpoints getting more exposure.

[–] [email protected] 21 points 2 years ago (3 children)

Is that effect any different from the one you'd get if you have biased references, or biased search results, when doing the research for your writing?

[–] [email protected] 2 points 2 years ago

Thanks for the clarification.

[–] [email protected] 11 points 2 years ago* (last edited 2 years ago) (7 children)

AFAIK lemm.ee fits all your requirements, what don't you like about it?

~~Edit: Maybe I'm wrong. It's run by Estonians, but looking up its IP, it points to the US, so maybe it's not hosted in the EU.~~

[–] [email protected] 1 points 2 years ago

Firefox + uBlock is usable (it has filters that block the "install app" prompt on mobile, but they need to be enabled in the settings).

[–] [email protected] 8 points 2 years ago* (last edited 2 years ago)

Actually, a lack of license doesn't mean "You're free to do whatever you want". It means "I retain full copyright and don't give anybody any permissions".
