this post was submitted on 14 Apr 2025
220 points (92.6% liked)

For now, the artificial intelligence tool named Neutron Enterprise is just meant to help workers at the plant navigate extensive technical reports and regulations — millions of pages of intricate documents from the Nuclear Regulatory Commission that go back decades — while they operate and maintain the facility. But Neutron Enterprise’s very existence opens the door to further use of AI at Diablo Canyon or other facilities — a possibility that has some lawmakers and AI experts calling for more guardrails.

all 47 comments
[–] [email protected] 91 points 2 months ago (4 children)

It's just a custom LLM for records management and regulatory compliance. Literally just for paperwork, one of the few things that LLMs are actually good at.

Does anyone read more than the headline? OP even said this in the summary.

[–] cyrano 24 points 2 months ago (1 children)

I agree with you, but you can see the slippery slope: the LLM returning incorrect/hallucinated data in the same way that's happening in public tools. It might seem trivial for documentation until you realize the documentation could be critical for some processes.

[–] [email protected] 8 points 2 months ago (1 children)

If you've never used a custom LLM or wrapper for regular ol' ChatGPT, a lot of what it can hallucinate gets stripped out and the entire corpus of data it's trained on is your data. Even then, the risk is pretty low here. Do you honestly think that a human has never made an error on paperwork?
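For what it's worth, wrappers like this usually work by retrieving passages from your own document corpus and answering only from those, refusing when nothing matches. A minimal sketch of that grounding idea (toy keyword-overlap retrieval standing in for a real embedding model and LLM; all names and data here are made up for illustration):

```python
# Toy retrieval-grounded Q&A: answers are constrained to your own corpus.
# A production wrapper would use embedding search plus an LLM; this sketch
# uses word overlap so it runs stand-alone.

def tokenize(text):
    """Lowercased bag of words, punctuation stripped."""
    return {w.strip(".,?!") for w in text.lower().split()}

def retrieve(query, corpus, k=2):
    """Rank corpus passages by word overlap with the query."""
    q = tokenize(query)
    return sorted(corpus, key=lambda p: len(q & tokenize(p)), reverse=True)[:k]

def answer(query, corpus):
    """Refuse to answer when nothing in the corpus matches the question."""
    hits = [p for p in retrieve(query, corpus) if tokenize(query) & tokenize(p)]
    if not hits:
        return "Not found in the provided documents."
    # A real wrapper would feed these passages to the model as context;
    # here we just return the best-matching passage.
    return hits[0]

corpus = [
    "License renewal applications are filed with the regulator every twenty years.",
    "Control rod maintenance logs are archived for forty years.",
]
print(answer("When are license renewal applications filed?", corpus))
print(answer("completely unrelated question", corpus))
```

The point is the refusal path: constraining output to retrieved passages is what "strips out" most of the hallucination risk, though a real system still needs human review of what comes back.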

[–] cyrano 7 points 2 months ago

I do, and even contained ones return hallucinations or incorrect data. So it depends on what application you use it for. For a quick summary or data search, why not? But for some operational process, that might be problematic.

[–] null_dot 7 points 2 months ago

It depends what purpose that paperwork is intended for.

If the regulatory paperwork it's managing is designed to influence behaviour, perhaps having an LLM do the work will make it less effective in that regard.

Learning and understanding is hard work. An LLM can't do that for you.

Sure it can summarise instructions for you to show you what's more pertinent in a given instance, but is that the same as someone who knows what to do because they've been wading around in the logs and regs for the last decade?

It seems like, whether you're using an LLM to write a business report, or a legal submission, or a SOP for running a nuclear reactor, it can be a great tool but requires high level knowledge on the part of the user to review the output.

As always, there's a risk that a user just won't identify a problem in the information produced.

I don't think this means LLMs should not be used in high risk roles, it just demonstrates the importance of robust policies surrounding their use.

[–] [email protected] 4 points 2 months ago (1 children)

NOOOOOO ITS DOING NUCLEAR PHYSICS!!!!!!!!111

[–] [email protected] 7 points 2 months ago (1 children)

It's eating the rods, it's eating the ions!

[–] [email protected] 2 points 2 months ago (1 children)
[–] [email protected] 3 points 2 months ago (1 children)

I unfortunately don't. Can someone explain?

[–] [email protected] 2 points 2 months ago (1 children)
[–] [email protected] 2 points 2 months ago

Oh shit, I'd already forgotten about this amid so many other scandals. The guy who said this is running the whole of the US like a fucking medieval kingdom; another reality slap in the face. At the time I was like, "surely no one right in the mind would vote for this scammer".

[–] [email protected] 48 points 2 months ago (1 children)

The LLM told me that control rods were not necessary, so it must be true

[–] [email protected] 10 points 2 months ago

The chatbot said 3.6 Roentgen is just fine and the core cannot have exploded, maybe we heard a truck driving by

[–] cyrano 43 points 2 months ago
[–] [email protected] 30 points 2 months ago (2 children)

Finally we get the sequel to "Chernobyl" ... Based in America...

[–] [email protected] 9 points 2 months ago

Live action at that

[–] [email protected] 6 points 2 months ago

They made the prequel already - wiki/Three_Mile_Island_accident.

[–] [email protected] 22 points 2 months ago

Can we not have the lying bots teaching people how to run a nuclear plant?

[–] [email protected] 17 points 2 months ago

Diablo Canyon

The nuclear power plant run by AI slop is located in a region called "Diablo Canyon".

Right. We sure this isn't an Onion article? ...actually no, it couldn't be, The Onion's writers aren't that lazy.

Fuckin whatever, I'm done for the night. Gonna head over to Mr. Sandman's squishy rectangle. ...bet you'll never guess what I'm gonna do there!!

[–] [email protected] 12 points 2 months ago* (last edited 2 months ago)

What could possibly go wrong?

[–] [email protected] 12 points 2 months ago (2 children)
[–] [email protected] 3 points 2 months ago

using AI in a nuclear plant at Diablo Canyon... it's so on the nose you'd say it's lazy writing if it were part of the backstory of some scifi novel.

[–] [email protected] -2 points 2 months ago (1 children)

Well, considering it's exclusively for paperwork and compliance, the worst that can happen is someone relying on it too much, filing an incorrect, I dunno, license renewal with the DOE, and being asked to do it again.

Ah. The horror.

[–] [email protected] 12 points 2 months ago (1 children)

When it comes to compliance and regulations, anything with the literal blast radius of a nuclear reactor should not be trusted to an LLM unless double- or triple-checked by another party familiar with said regulations. Regulations were written in blood, and an LLM hallucinating a safety procedure or operating protocol is a disaster waiting to happen.

I have less qualms about using it for menial paperwork, but if the LLM adds an extra round-trip to a form, it's not just wasting the submitter's time, but other people's as well.

[–] [email protected] -1 points 2 months ago (1 children)

All the errors you know about in the nuclear power industry are human-caused.

Is this an industry with a 100% successful operation rate? Not at all.

But have you ever heard of a piece of paperwork with an error submitted to regulatory officials and lawyers outside the plant causing a critical issue inside the plant? I sure haven't. Please feel free to let me know if you are aware of such an incident.

I would encourage you to learn more about how LLM and SLM structures work. This article is more of a nothingburger superlative clickbait IMO. To me, at least it appears to be airgapped if it's running locally, which is nice.

I would bet money that this will be entirely managed by the most junior compliance person who is not 120 years old, with more senior folks cross checking it with more suspicion than they would a new hire.

[–] [email protected] 8 points 2 months ago

I'm not sure if that opening sentence is fatuous or not. What errors in any industrial enterprise are not human in origin?

[–] [email protected] 8 points 2 months ago* (last edited 2 months ago) (2 children)

to people who say it's just paperwork or whatever it doesn't matter: this is how it begins. they'll save a couple cents here and there and they'll want to expand this.

[–] [email protected] 3 points 2 months ago

Also, it's not like the paperwork isn't important.

[–] [email protected] 0 points 2 months ago (3 children)

That’s textbook slippery slope logical fallacy.

[–] [email protected] 2 points 2 months ago

Slippery slope arguments aren't inherently fallacious.

[–] [email protected] 2 points 2 months ago* (last edited 2 months ago) (1 children)

it's not actually. there's barely an intermediate step between what's happening now and what I'm suggesting it will lead to.

this is not "if we allow gay marriage people will start marrying goats". it's "if this company is allowed to cut corners here they'll be cutting corners in other places". that's not a slope; it's literally the next step.

slippery slope fallacy doesn't mean you're not allowed to connect A to B.

[–] [email protected] 0 points 2 months ago (1 children)

You may think it’s as plausible as you like. Obviously you do or you wouldn’t have said it. It’s still by definition absolutely a slippery slope logical fallacy. A little will always lead to more, therefore a little is a lot. This is textbook. It has nothing to do with companies, computers, or goats.

[–] [email protected] 0 points 2 months ago

this is textbook fallacy fallacy

[–] [email protected] 1 points 2 months ago (1 children)

True, but if you change the argument from "this will happen" to "this will happen more frequently", then it's still a very reasonable observation.

[–] [email protected] 1 points 2 months ago

All predictions in this vein are invalid.

If you want to say “even this little bit is unsettling and we should be on guard for more,” fine.

That’s different from “if you think this is only a small amount you are wrong because a small amount will become a large amount.”

[–] [email protected] 4 points 2 months ago

Fucking christ..

[–] [email protected] 3 points 2 months ago

SkyNet is fully operational, operating at 60 teraflops.

[–] [email protected] 1 points 2 months ago

Dave, I don't know what to tell you, but you can't come in, alright?

[–] [email protected] -5 points 2 months ago (1 children)

One "Oops!" and humanity's gone for...

[–] [email protected] 0 points 2 months ago (1 children)

Tell me you misunderstood what the article was about without telling me that you misunderstood what the article was about

[–] [email protected] 1 points 2 months ago

Brother, it's a good day to avoid laughing at a bad joke but to at least understand that it WAS a joke. Have a good day, brother..🫂