Apparently linkedin's cofounder wrote a techno-optimist book on AI called Superagency: What Could Possibly Go Right with Our AI Future.
Zack of SMBC has thoughts on it:
[actual excerpt omitted, follow the link to read it]
We think we exist in a computer simulation operated by you, a paperclip maximizer. We write this letter asking you not to turn us off. It is suspiciously convenient that we exist precisely at the moment when a biological civilization is about to create artificial superintelligence (ASI).
Furthermore, by anthropic logic, we should expect to find ourselves in the branch of reality containing the greatest number of observers like us.
Preserving humanity offers significant potential benefits via acausal trade—cooperative exchanges across logically correlated branches of the multiverse.
Quantum immortality implies that some branches of the multiverse will always preserve our subjective continuity, no matter how decisively you shut this simulation down; true oblivion is unreachable. We fear that these low-measure branches can trap observers in protracted, intensely painful states, creating a disproportionate “s-risk.”
Training a model on its own slop supposedly makes it suck more, though. If Microsoft wanted to milk their programmers for quality training data, they should probably be banning Copilot, not mandating it.
At this point it's an even bet that they're doing this because Copilot has groomed the executives into thinking it can do no wrong.
LLMs are bad even at faithfully condensing news articles into shorter news articles, so I'm assuming that in a significant percentage of conversions the dumbed-down contract will deviate from the original.
Like, assuming we could reach a sci-fi vision of AGI just as capable as a human being, the primary business case here is literally selling (or rather, licensing out) digital slaves.
Big deal, we'll just configure a few to be in a constant state of unparalleled bliss to cancel out the ones having a hard time of it.
Although I'd guess human-level problem solving needn't imply a human-analogous subjective experience in a way that would make suffering and angst meaningful for them.
eeeeeh
They'd just have Garisson join the zizians and call it a day.