People are increasingly turning to LLMs for a wide range of cognitive tasks, from creative writing and language translation to problem-solving and decision-making.
If this guy's circle of acquaintances includes an increasing number of people who rely on fancy autocomplete for decision-making and creative writing, I might have an idea why he thinks LLMs are super intelligent in comparison.
To achieve human escape velocity, we might need to leverage the very technologies that challenge our place in the cognitive hierarchy. By integrating AI tools into our educational systems, creative processes, and decision-making frameworks, we can amplify our natural abilities, expand our perspectives, and accelerate innovation in a way that is symbiotic rather than competitive.
Wait, let me get this straight. His solution for achieving human escape velocity, which means "outpac[ing] AI's influence and maintain[ing] human autonomy" (his words, not mine), is to increase AI's influence and remove human autonomy?
This is the next level of “I put my symptoms into Google and WebMD told me I have cancer”.
My compassion goes out to any doctors who now not only have to explain to several idiots every day that a slight pain in their pinky finger does not, in fact, mean they probably have ball cancer, but also that some vaguely professional-sounding fluff disguised as a diagnosis, generated by a chatbot, doesn’t mean they probably have ball cancer either.