this post was submitted on 18 Jun 2025
79 points (98.8% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago

My work thinks they're being forward-thinking by shoving AI into everything. We aren't forced to use it, but we are encouraged to. Outside of using it to convert a screenshot to text (not even AI... that's just OCR), I haven't had much use for it, since it's wrong a lot. It's pretty useless for the type of one-off work I do as well. We are supposed to share any "wins" we've had, but I'd sooner they stop paying a huge subscription to Sammy A.

top 22 comments
[–] [email protected] 4 points 9 hours ago

Mine, but they invested in an agent instead of just an LLM chat. I'm not interested in the chat-based ones, I don't think they provide much value, but Claude Code is genuinely a beast. My jaw drops every time I use it and it just does things for me, with a clear task list that it updates and a confirmation/edit prompt when editing files or running bash commands. I can delegate all my boring or precisely defined tasks to it and focus instead on coding and finding solutions to interesting problems.

For example, last week I found a function parameter that was not in use. I usually go through git history to check when/why it was removed, but it's not something I do often, and going command hunting every time is just not exciting. I gave it the following prompt:

"in file x, function a, there is parameter b which is no longer used. Go through git history and find the commit when it was deleted from the function body."

and I was able to have a two minute break while it executed git and grep commands. Yes it is a small thing, but then there are many small things like that throughout the day and not having to do them saves a lot of mental capacity.
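For what it's worth, this particular hunt is also a one-liner by hand with git's pickaxe search (`-S`). A minimal, self-contained sketch in a throwaway repo — the file, function, and parameter names (`x.py`, `function_a`, `param_b`) are hypothetical stand-ins, not from the comment above:

```shell
set -e
# Throwaway repo so the demo is self-contained:
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git config user.email demo@example.com && git config user.name demo

# Commit 1: function_a takes param_b. Commit 2: param_b is removed.
printf 'def function_a(param_b):\n    return param_b\n' > x.py
git add x.py && git commit -qm 'add function_a'
printf 'def function_a():\n    return 1\n' > x.py
git commit -qam 'drop unused param_b'

# -S lists every commit that changed the number of occurrences of the
# string, i.e. both the commit that added it and the one that removed it.
# (git log -L :function_a:x.py would show the function's full history.)
git log -S 'param_b' --oneline -- x.py
```

The newest commit in that output is the deletion; `git show` on its hash gives the when and why.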

[–] [email protected] 2 points 7 hours ago

We have a company-wide SMART goal to "utilize AI in our daily work". Not sure how that can be proven, but whatevs. GitHub Copilot cost me an hour just yesterday, when it put in a semicolon that didn't belong and was very tough to track down. It's nice when it works, but it will cost you lots of time when it doesn't. I guess I fulfilled my SMART goal?

[–] [email protected] 2 points 10 hours ago

On one occasion I ran into a problem that took me weeks to solve. I had exhausted all kinds of support already, so people (including my boss) suggested trying ChatGPT. I had a difficult time explaining to them that this simply would be no help.

In the end I fixed it myself by reading thousands of helpdesk articles and eventually drawing the right conclusion, but it was so dependent on local variables that an LLM would not have been able to find a solution here.

[–] [email protected] 5 points 18 hours ago

My company recently announced to the whole IT department that they're contracting with Google to get Gemini for writing code and stuff. They even had someone from Google give a presentation rife with all kinds of propaganda about how much Gemini will "help" us write code. Demoed the IntelliJ integration and everything. I wouldn't say we were "asked" to use it, but we were definitely "encouraged" to. But since then, there's been no information on how to actually use our company-provided Gemini license/integration/whatever. So I don't think anyone's using it yet.

I'd love to tell everyone on my team not to use it, and I am kind of "in charge" of my team a bit. But it's not like there aren't many (too many) levels of management above me, and it's clear they wouldn't have my back if I put my foot down about that. So I've told my team not to commit any code unless they understand it as well as they would if they'd written it themselves. I figure that's sufficiently noncommittal that the pro-Gemini upper management won't have a problem with it, while also (assuming anyone on my team heeds it) minimizing the damage.

[–] [email protected] 31 points 1 day ago* (last edited 1 day ago) (1 children)

My job has been paying hundreds of euros per person for access to Microsoft's LLM, and it really is just a toy at this point. It's been helpful to me a handful of times over the last year. A better investment would be a non-shitty version of Google.

[–] [email protected] 16 points 1 day ago (1 children)

No fucking kidding. I hate how bad internet search has become. Time to pay for Kagi, I guess... Back in my day you could actually search the internet for actual content and not slop and ads.

[–] [email protected] 3 points 1 day ago

I have Kagi, it's worth it. They're stretching themselves a bit thin tho

[–] [email protected] 22 points 1 day ago (3 children)

My company has adopted llms in a big way and has set per-employee "time savings" goals.

[–] [email protected] 20 points 1 day ago

Sorry, I threw up a little at that

I can't wait until it all fails miserably, honestly. But maybe it won't; we'll see.

[–] [email protected] 15 points 1 day ago

That’s not just a red flag, it’s a bunch. And they’re being waved by a marching band. I’d start looking for new work if I were you, because your company’s leadership is dangerously incompetent.

AI is a "time saver" in only very narrow circumstances and only in the hands of certain people who are already at the top of their fields. You need to be able to immediately recognize when it's wrong, because if you don't, it costs time and, worse, introduces major flaws into the work.

[–] [email protected] 20 points 1 day ago

My boss tried to get us to use the commercial ChatGPT account he got for the business, but within two months everybody stopped using it.

He didn't try to mandate it, but he did encourage us to try it. In the end it was just more work than doing what we needed to do ourselves without its "assistance".

[–] [email protected] 7 points 1 day ago (1 children)

Yep. It's level 0 tech support, HR, etc. It's about 50% successful. Then when it fails, it connects you to a person.

[–] [email protected] 1 points 5 hours ago

I wouldn't be so opposed to it if this was the case with Copilot, but at my job it never "fails". It never says, "I don't have enough data on that," or, "You should contact an appropriate resource." It always has an answer that is very confidently portrayed.

Now I'm flooded with tickets from users saying, "I followed Copilot's instructions and this still didn't work," with screenshots of Copilot where they asked it how to do something that is impossible with our software. Then I have to argue with them about it because they believe the LLM over IT. Or users asking for permission to see a button/link that doesn't exist because the last 50% of the steps are pure hallucinations.

[–] [email protected] 11 points 1 day ago

My boss got us a ChatGPT corporate workspace or something? They gave me access, which I promptly "forgot" about (read: intentionally did not use). Then the other day, I asked my boss for help with a tricky piece of coding, and they screenshared while they typed my question directly into ChatGPT and sent me its answer...

I tried using it once for a different piece of tricky coding. I ended up arguing with it, telling it multiple times that it was making the same error over and over with each new "totally corrected" answer it gave me. I gave up after an hour without any progress in the right direction.

[–] [email protected] 4 points 1 day ago

The company I work for has its own LLM; ChatGPT, Deepseek, Claude, and the others are all blocked. Ours works OK, and it's available to anyone who wants to use it — nobody is forced to.

[–] [email protected] 7 points 1 day ago

I'm on our Copilot demo team, so I help some business folks use it in Teams or Office a little. Mostly I design controls around limiting its impact, but all users have access to build agents by default, which is dumb. So yeah, not my full-time gig by any means, but extracting "value" is subjective, and it takes time to build some use cases.

[–] [email protected] 6 points 1 day ago

My company is pushing it. The funny thing is that it's totally inappropriate for my job. Plus, I find it amusing that they're complaining about the lack of profit (cutting Zoom for GMeet to save cost, and dropping O365 expecting we can switch to GSheets), and then talking about AI in the same breath.

[–] [email protected] 1 points 19 hours ago

Haven't been asked yet, but my company is in the process of adding "AI" (as ambiguous as that is) to many business processes. (Presented right after a presentation on sustainability...)

I know many developers love LLMs, but they seem so useless to me — an LLM is not gonna fix tech debt, untangle wonky git issues, or know how to query a complex 20+-year-old DB. I have access to an LLM and wouldn't know what to ask for that I couldn't do myself.

[–] [email protected] 3 points 1 day ago

Not my department specifically, but my organization has. And I work for a county government, so... be worried about that going forward, I'd say.

[–] [email protected] 2 points 1 day ago

As a programmer, I've got some pretty good use cases. GitHub Copilot has been hit or miss: it can do tedious, common coding well, but the less common the task, the more trouble it has.

[–] [email protected] 2 points 1 day ago

Encouraged to use it, had a vibe-coding hackathon, and released a new product six months ago built on a ChatGPT integration.