There are some numbers in this blog post https://www.wheresyoured.at/openai-is-a-systemic-risk-to-the-tech-industry-2/ (and in a couple of others on the same blog), and based on them it really doesn't look like OpenAI is going to last the couple of years it would need to reach profitability.
“Talking chimps destroy own planet, and themselves”
More like "All chimps listen to the most charismatic chimps even though they are narcissistic idiots, and that leads to their destruction"
The difficult question about AGI destroying humanity is deciding whether to be afraid of that outcome or to cheer it on, and LLM enthusiasts are certainly among the people pushing me heavily towards the 'cheer it on' option.
As a standalone thing, LLMs are awesome.
They really aren't, though, and that is half the problem. Everyone pretends they are awesome when the results are garbage 80% of the time, which makes them unusable for 99% of practical applications.
The difference between AI companies and most other tech companies is that AI companies have significant expenses that scale with the number of customers.
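To make that concrete, here is a toy back-of-the-envelope sketch in Go. Every constant in it is an invented illustrative number (plan price, blended inference cost, usage), not an actual OpenAI figure; the only point is the shape of the arithmetic.

```go
// Toy unit-economics sketch. All constants are made-up illustrative
// numbers, not real OpenAI figures.
package main

import "fmt"

func main() {
	const (
		subscriptionUSD  = 20.0 // hypothetical monthly plan price
		costPerMTokens   = 5.0  // assumed blended inference cost per million tokens, USD
		tokensPerMonthM  = 6.0  // assumed millions of tokens a heavy user consumes per month
	)

	inferenceCost := costPerMTokens * tokensPerMonthM
	margin := subscriptionUSD - inferenceCost

	fmt.Printf("revenue: $%.2f  inference cost: $%.2f  margin: $%.2f\n",
		subscriptionUSD, inferenceCost, margin)
	// Under these assumptions every additional heavy subscriber loses money,
	// whereas classic SaaS has near-zero marginal cost per extra user.
}
```

With those made-up numbers the margin is -$10 per heavy user per month; the exact figures don't matter, what matters is that the marginal cost per customer is nowhere near zero.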
You think someone just got confused between the ugly guy, the tower, and the top position, and that is how we ended up in this mess?
It goes so far that a lot of the very same people vilifying open relationships are the ones cheating on their partners.
On the other hand, that is also one of those things that annoys me about romance culture: the whole notion of your girlfriend/boyfriend/wife/husband being "stolen" by someone else, as if your partner were just a passive object instead of the actual person in the cheating who made promises to you (which might or might not include sexual exclusivity, depending on what everyone in the relationship mutually agreed upon) and who should either keep those promises or break up with you, no matter what any third person tempts them with.
It is really not a big change to the way we work unless you work in a language with very low expressiveness like Java or Go, and we have been able to generate that boilerplate automatically for decades.
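For instance, Go has shipped first-party boilerplate generation via `go generate` since Go 1.4 (2014). This minimal sketch uses the real `stringer` tool (installed with `go install golang.org/x/tools/cmd/stringer@latest`); the type and constant names are just illustrative:

```go
// pill.go - minimal, illustrative go:generate example.
// Running `go generate` invokes stringer, which writes pill_string.go
// containing a String() method for Pill: boilerplate nobody types by hand.
package pill

//go:generate stringer -type=Pill

type Pill int

const (
	Placebo Pill = iota
	Aspirin
	Ibuprofen
)
```

No LLM involved, and the generated code is deterministic and correct every time.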
The main problem is that it really isn't useful and doesn't produce genuinely beneficial results, yet everyone keeps telling us it does while being unable to point to a single GitHub PR or similar source as an example of a good piece of AI-generated code that didn't need heavy manual post-processing. That also completely ignores that reading and fixing other people's (or worse, an AI's) code is orders of magnitude harder than writing the same code yourself.
Probably not going to go belly-up for a while
Don't be so sure about that; the numbers look incredibly bad for them in terms of money burned per dollar of actual revenue, never mind profit. They can't even pay for inference alone (never mind training, staff, rent, ...) from the subscriptions.
Yeah, I wonder how the LAPD can be unable to handle ten groups of 1000 people in a metropolitan area of almost 20 million people; I would have assumed ten groups of 1000 people is more like a quiet weekend.
Name a single task you would trust an LLM to solve for you, confident the output would be correct without checking it. Because that is my definition of 'perfectly', and AI falls very, very far short of it.