ShakingMyHead

joined 11 months ago
[–] [email protected] 4 points 9 months ago* (last edited 9 months ago)

I believe the future is going to be so bright that no one can do it justice by trying to write about it now.

Uh

[–] [email protected] 6 points 9 months ago

Finally, of course, it is very much not just rationalists who believe that AI represents an existential risk. We just got there twenty years early.

This one?

[–] [email protected] 8 points 9 months ago

We could also just fluoridate the water supply, which also massively reduces cavities.

[–] [email protected] 11 points 9 months ago (2 children)

Obligatory note that, speaking as a rationalist-tribe member, to a first approximation nobody in the community is actually interested in the Basilisk and hasn’t been for at least a decade.

Sure, but that doesn't change the fact that the head EA guy wrote an op-ed for Time magazine arguing that a nuclear holocaust is preferable to a world that has GPT-5 in it.

[–] [email protected] 8 points 9 months ago* (last edited 9 months ago) (6 children)

Are you saying that Clippy is proof I'm right or proof I'm wrong? Or am I just being unfunny and not getting the joke?

[–] [email protected] 14 points 9 months ago (8 children)

Microsoft is making laptops with dedicated Copilot buttons.

I think they'd rather burn their company to the ground, all the while telling their customers that they just needed to wait a little while longer, rather than admit that they got it wrong.

[–] [email protected] 5 points 9 months ago

Who is even asking for this?

[–] [email protected] 4 points 10 months ago (1 children)

https://digitaldemocracy.calmatters.org/bills/ca_202320240sb1047

Have an AI regulation committee, and also give that committee its own hardware so it can use that hardware to regulate the other hardware. Maybe.
