Let me guess: the 3% are the corporate heads, C-suites, MBAs, and the people either implementing it or deploying it.
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
The management regrets to inform the TechTakes/awful.systems community that this post has apparently escaped containment. In order to continue providing the environment that this community deserves, we will be distributing free tickets to the egress in response to comments that exhaust our patience.
Mods when a post escapes containment: No! No!!
Sickos like me when a post escapes containment and they get to see the worst takes humanity has to offer: Yes... Ha ha ha... YES!
This shit is not Artificial Intelligence. It's internet-scraping software that understands your input, then searches and summarizes the answer back to you in your language... AND so many times it makes mistakes while trying to even do that. 0 intelligence, 0 creativity, 0 feelings/empathy/sympathy, 0 everything. In programming, it's like a computer-science intern on methamphetamines: he's searching Stack Overflow and GitHub repos for any question you have, but again, he will never come up with new, genius, never-before-seen programming scripts, and he may make mistakes.
Also, it has brainrotted kids' ability to learn and killed our interactions and creativity.
I mostly have this Gemini assistant because Google essentially added it for me. Of course I tried a bit of GPT. My advice is that, even if they're good now, there's a chance they may not be anymore in the future. Or not how you expect them to be. We have to make it good too, but right now the world is hooked on AI.
I have seen too much AI spam to care about AI images. There is this YouTube series with AI-assisted animations (Monoverse, Neural Viz); that is the only good use of AI I've ever seen so far in media creation. But other than that, it's getting dystopian out there.
Lol, I don't blame them.
These machines aren't good. I'm a curious person, I check things out. They're fascinating toys; it's amazing to see a computer do such a convincing mimicry of speech. However, I've tried using them for the things the companies spruik and they suck.
Talking to them is more lonely than just voicing my thoughts to the birds in the garden; there's no person there, just sycophantic affirmation. I have disastrously low self-esteem and even I'm beyond that.
You can't learn about topics from them because they will bullshit in exactly the same authoritative tone in which they dispense genuine insight. You hire teachers not for the facts, but for the effort spent organising and filtering them. A textbook can easily be pirated in most fields you're interested in (fuck academic publishing lmao) and is a much better use of your time. If you're in a hurry, just be wrong about something on Stack Exchange or a subreddit :p Honestly, even searching for random badly edited YouTube videos usually yields more reliable info if you're not very literate.
The code they make is awful, like critical mistakes everywhere. Exactly what you'd expect from a copy-paste-from-GitHub-and-Stack-Overflow-until-it-works approach. Yeah, we joke about how programmers actually just Google things, but no: if you're doing your job right, serious effort goes into planning approaches and making sure you're across the relevant specs and styles for building maintainable code. Like, I made a doohickey for screen rotation using one of these LLMs, and if I didn't know how Linux polls sensors, it would have kept the sensor constantly reading, which would have mangled the battery life.
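(For the curious: here's a toy Python sketch of the polling trap described above. Nothing to do with the actual doohickey, and no real Linux IIO calls; the sensor is simulated and all names are made up, purely to show why read-on-every-tick vs read-on-demand matters for battery.)

```python
class FakeAccelerometer:
    """Simulated sensor that counts how often it is read,
    standing in for a real accelerometer draining battery."""
    def __init__(self):
        self.reads = 0

    def read(self):
        self.reads += 1
        return (0, -1, 0)  # pretend the device is upright

def busy_poll(sensor, ticks):
    """The naive LLM-style loop: read the sensor on every tick,
    whether or not anything changed."""
    for _ in range(ticks):
        sensor.read()

def on_demand(sensor, events):
    """Read only when something (e.g. a rotation event) asks."""
    for triggered in events:
        if triggered:
            sensor.read()

polling = FakeAccelerometer()
busy_poll(polling, ticks=1000)

lazy = FakeAccelerometer()
on_demand(lazy, events=[False] * 998 + [True, True])

print(polling.reads, lazy.reads)  # 1000 vs 2
```

Same window of time, 500x fewer sensor wakeups; that's the difference between knowing how the platform delivers sensor data and just gluing snippets together until the screen rotates.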
Summaries are a crapshoot, more like what students do to avoid plagiarism pings. Clumsy rewordings, random waffle, missing the key points. More rephrasing than summary, and then dropping random sections to hit the word limit.
You might say skill issue; sure, whatever, I'm sure you can become a better operator, but no amount of "prompt engineering" will solve the fundamental reliability problem. If you have to be capable of and willing to duplicate the work to verify everything, what use is this? We're not solving cryptography problems where testing is low effort. If I want to check a bunch of factual claims I need to do an entire lit review to ensure nothing important was missed...