V0ldek

joined 2 years ago
[–] [email protected] 7 points 1 year ago

They said they would if the actual audio is ever released; so far we only have the transcript...

[–] [email protected] 7 points 1 year ago

Using it for ML training would also be illegal in the EU under GDPR.

But this already exists. My colleague had to submit a video self-interview when applying to Goldman Sachs, the pillar of morality and ethics in the corporate world.

[–] [email protected] 7 points 1 year ago

If you put GPUs into an MRI it would definitely be a sight to behold.

[–] [email protected] 11 points 1 year ago

I bet you don’t even understand something as basic as how BGP makes the internet work

You'd bet correctly, I believe packets are moved around the wired network by gnomes and the wireless network by fairies. What I don't do, however, is confidently tell students lies about a topic I don't understand, which happens to be an AI chat's job description.

[–] [email protected] 7 points 1 year ago (1 children)

The description of the algorithm is correct, although I'm not sure how much easier it is to understand if you call everything "thing". A graph is a really easy thing to explain: it's circles and lines between them. You can just call it circles and lines, it's okay. The pseudocode section completely changes the style out of nowhere, which is bizarre. And it doesn't really include any explanation; it just presents the method, which ChatGPT was more-or-less able to do.
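For anyone who'd rather see the actual algorithm than a description full of "things": here's a minimal sketch of Dijkstra's shortest-path algorithm in Python. The circles are dict keys, the lines are (neighbor, weight) pairs; all names here are mine, not from the explanation being discussed.

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source in a graph with non-negative
    edge weights. `graph` maps each node (circle) to a list of
    (neighbor, weight) pairs (lines)."""
    dist = {source: 0}
    heap = [(0, source)]          # (distance-so-far, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue              # stale entry; a shorter path was found already
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd      # found a shorter path to v
                heapq.heappush(heap, (nd, v))
    return dist
```

On the toy graph a→b (1), a→c (4), b→c (2), it correctly finds that the cheapest route to c goes through b, costing 3 rather than 4.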

[–] [email protected] 9 points 1 year ago (1 children)

Bing Chat tried to annotate claims with references to websites, and the result was predictable: it says bullshit and then plugs in a reference that doesn't actually substantiate what it said.

[–] [email protected] 13 points 1 year ago (10 children)

The issue with those nitpicks is that you need to already know about Dijkstra to pick up that something is fishy and ask for clarification.

I call BS on all of this. If anything, my little experiment shows that sure, you can ask it for clarification (like giving a counterexample), and it will happily and gladly lie to you.

The fact that an LLM will very politely and confidently feed you complete falsehoods is precisely the problem. At least StackOverflow people don't give you incorrect information out of rudeness.

[–] [email protected] 16 points 1 year ago* (last edited 1 year ago) (1 children)

One thing I didn't focus on, but which is important to keep in context, is that the cost of a semi-competent undergrad TA is like a couple k a month, a coffee machine, and some pizza; whereas the LLM industrial complex had to accelerate the climate apocalypse by several years to reach this level of incompetence.

Sam Altman personally choked an orca to death so that this thing could lie to me about graph theory.

[–] [email protected] 8 points 1 year ago

God I hate this place

[–] [email protected] 8 points 1 year ago (5 children)

Is my delving not to your satisfaction?

[–] [email protected] 8 points 1 year ago

Try reading something like Dijkstra's algorithm on Wikipedia, then ask one to explain it to you.

I did! I feel entitled to compensation now!

[–] [email protected] 13 points 1 year ago

Microsoft does dogfood all the things actually, which I think is one of the good aspects of the company.

We used the experimental versions of Teams, all Azure changes were first deployed to the part of the cloud used by MSFT, etc. Even new C#/.NET versions are first run through internal projects.
