zogwarg

joined 2 years ago
[–] [email protected] 6 points 3 weeks ago

Funnily enough, it isn't even required by their purported Bayesian doctrine (which proves none of them do the math); you could simply "update forward" again based on the new evidence that the text is part-fictional.
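For whatever it's worth, a minimal sketch of what "updating forward" again would look like, with entirely made-up priors and likelihoods (none of these numbers come from anywhere, they're just for illustration):

```python
# Two successive Bayesian updates, the second on the new evidence
# that the text is part-fictional. All numbers are made up.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) given P(H) and the likelihoods of the evidence."""
    numerator = p_e_given_h * prior
    marginal = numerator + p_e_given_not_h * (1 - prior)
    return numerator / marginal

# First update: taking the text at face value as evidence for hypothesis H.
posterior = bayes_update(prior=0.10, p_e_given_h=0.8, p_e_given_not_h=0.2)  # ~0.31

# "Update forward" again: learning the text is part-fictional is itself evidence,
# and it pulls the posterior back down toward the original prior.
posterior = bayes_update(prior=posterior, p_e_given_h=0.3, p_e_given_not_h=0.5)

print(round(posterior, 3))  # 0.211
```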

[–] [email protected] 17 points 3 weeks ago

Counter-theory: the now completely irrelevant search results and the idiotic summaries are a one-two punch combo that plunges the user into despair and makes them close the browser out of disgust.

[–] [email protected] 10 points 3 weeks ago

Subjectively speaking:

  1. Pre-LLM summaries were for the most part actually short.
  2. They were more directly lifted from human-written sources. I vaguely remember lawsuits, or the threat of lawsuits, by newspapers over Google infoboxes and copyright infringement in pre-2019 days, but I couldn't find anything very conclusive with a quick search.
  3. They didn't have the sycophantic "hey, look at me, I'm a genius", overly (and wrongly) detailed tone that the current batch has.

[–] [email protected] 10 points 3 weeks ago

This is obviously a math olympiad gold medal performance, Fields medal worthy even!

[–] [email protected] 14 points 3 weeks ago (1 children)

It can't be that stupid, you haven't read the sequences hard enough.

[–] [email protected] 8 points 1 month ago (1 children)

I mean if you want to be exceedingly generous (I sadly have my moments), this is actually remarkably close to the "intentional acts" and "shit happens" distinction, in a perverse Rationalist way. ^^

[–] [email protected] 6 points 1 month ago

But code that doesn’t crash isn’t necessarily code that works. And even for code made by humans, we sometimes do find out the hard way, and it can sometimes impact an arbitrarily large number of people.

[–] [email protected] 6 points 1 month ago* (last edited 1 month ago) (2 children)

Did you read any of what I wrote? I didn't say that human interactions can't be transactional; I quite clearly (at least I think) said that LLMs are not even transactional.


EDIT:

To clarify, and to maybe put it in terms which are closer to your interpretation:

With humans: Indeed you should not have unrealistic expectations of workers in the service industry, but you should still treat them with human decency and respect. They are not there to fit your needs; they have their own self, which matters. They are more than meets the eye.

With AI: While you should also not have unrealistic expectations of chatbots (which I would recommend avoiding altogether, really), where humans are more than meets the eye, chatbots are less. Inasmuch as you still choose to use them, by all means remain polite (for your own sake, rather than for the bot's), but there's nothing below the surface.

I don't personally believe that taking an overly transactional view of human interactions is desirable or healthy; I think it's more useful to frame it as respecting other people's boundaries and recognizing when you might be a nuisance (or when to be a nuisance, when there is enough at stake). Indeed, I think (not that this appears to be the case for you) that being overly transactional could lead you to believe that affection can be bought, or that you can be owed affection.

And I especially don't think it healthy to essentially be saying: "have the same expectations of chatbots and service workers".


TLDR:

You should avoid catching feelings for service workers because they have their own world and wants, and bringing unsolicited advances is being a nuisance; it's not just about protecting yourself, it's also about protecting them.

You should never catch feelings for a chatbot, because it doesn't have its own world or wants, and projecting feelings onto it is cutting yourself off from humanity; it is mostly about protecting yourself, although I would also argue it protects society (by staying healthy).

[–] [email protected] 8 points 1 month ago* (last edited 1 month ago) (4 children)

Don't besmirch the oldest profession by making it akin to a soulless vacuum. It's not even a transaction! The AI gains nothing and gives nothing. It's alienation in its purest form (no wonder the rent-seekers love it). It's the ugliest and least faithful mirror.

[–] [email protected] 10 points 1 month ago

✨The Vibe✨ is indeed getting increasingly depressing at work.

It's also killing my parents' freelance translation business. There is still money in live interpreting, and in prestige work or work where highly technical accuracy very obviously matters, but a lot of it is drying up.

[–] [email protected] 5 points 1 month ago* (last edited 1 month ago) (1 children)

Jinsatsu Zetsubō (人殺・絶望, but his thralls call him Ginny) was not your ordinary vampire goth demon lord... He delighted in his garments of true terror and dread, what better source of inescapable despair than his beige ulster coat, barely held together by off-yellow gold pins, with a salmon pink napkin in the over pocket, an ensemble designed to inspire trudgery sucking all soul and joy from any passerby...

[–] [email protected] 6 points 1 month ago (1 children)

A glorious snippet:

The movement ~~connected to~~ attracted the attention of the founder culture of Silicon Valley and ~~leading to many shared cultural shibboleths and obsessions, especially optimism about the ability~~ of intelligent capitalists and technocrats to create widespread prosperity.

At first I was confused about what kind of moron would try using "shibboleth" positively, but it turns out it's just terribly misquoting a citation:

Rationalist culture — and its cultural shibboleths and obsessions — became inextricably intertwined with the founder culture of Silicon Valley as a whole, with its faith in intelligent creators who could figure out the tech, mental and physical alike, that could get us out of the mess of being human.

Also lol at insisting on "exonym" as a descriptor for TESCREAL, removing Timnit Gebru and Émile P. Torres and the clear intention of criticism from the term; it doesn't really even make sense to use the acronym unless you're doing critical analysis of the movement(s). (Also removing mentions of the especially strong overlap between EA and rationalists.)

It's a bit of a hack job at making the page more biased, with a very thin veneer of still using the sources.
