mbtrhcs

joined 11 months ago
[–] [email protected] 1 points 1 week ago (1 children)

Yeah sure, you found the one notorious TypeScript feature that actually emits code, but a) this feature is recommended against and not used much to my knowledge and, more importantly, b) you cannot tell me that you genuinely believe the use of TypeScript enums – which generate extra function calls for a very limited number of operations – will 5x the energy consumption of the entire program.
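
(For context, here's roughly what that emit looks like; the enum name below is just a made-up example, not from the paper:)

```typescript
// Made-up example: a plain (non-const) TypeScript enum
enum Direction {
  Up,
  Down,
}

// tsc emits roughly this JavaScript: a single IIFE that builds a
// reverse-mapped lookup object once at module load time.
//
// var Direction;
// (function (Direction) {
//     Direction[Direction["Up"] = 0] = "Up";
//     Direction[Direction["Down"] = 1] = "Down";
// })(Direction || (Direction = {}));

// Using `const enum Direction { ... }` instead would inline the numeric
// values and emit no extra code at all.
```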

[–] [email protected] 1 points 1 week ago

I was expecting this to be an article about car-centric urban planning and the displacement of pedestrians, in which case I would have agreed immediately. Too bad it was apparently just a kind of over-elaborated shower thought.

[–] [email protected] 1 points 1 week ago (3 children)

Only if you choose an older language level as the compilation target. Given these results, I suspect the researchers had it output JS for something like ES5, meaning a bunch of down-leveled helpers and polyfills for old browsers that they didn't include in the JS-native implementation.
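
(As a rough sketch of what I mean, assuming a standard tsconfig setup; the file name and function are hypothetical:)

```typescript
// Hypothetical input.ts: modern async syntax, nothing TypeScript-specific
// beyond the type annotation.
const greet = async (name: string): Promise<string> => `Hello, ${name}`;

// With "target": "ES2017" (or newer) in tsconfig.json the output is
// essentially the same code minus the types.
//
// With "target": "ES5" the compiler rewrites the async function into a
// state machine built on __awaiter/__generator helpers, so the measured
// "TypeScript" program carries extra code that a hand-written JS baseline
// wouldn't have.
```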

[–] [email protected] 8 points 1 week ago

I'm an empirical researcher in software engineering, and all of the points you're making are supported by recent papers on SE and/or education. We are also seeing a strong shift in our students' behavior and a lack of ability to explain or justify their "own" work.

[–] [email protected] 5 points 3 weeks ago

more like MCETAPB (most cops enable, tolerate and protect bastards)

[–] [email protected] 2 points 3 weeks ago

At least on a Mac keyboard, the en dash is also alt+hyphen and the em dash is shift+alt+hyphen.

[–] [email protected] 5 points 4 weeks ago* (last edited 4 weeks ago)

Even worse, the pilots and the airlines didn't even know the sensor or associated software control existed and could do that.

[–] [email protected] 2 points 4 weeks ago

let's see if we can find supporting information on this answer elsewhere or maybe ask the same question a different way to see if the new answer(s) seem to line up

Yeah, that's probably the best way to go about it, but it still requires some foundational knowledge on your part. For example, in a recent study I worked on, we found that programming students struggle hard when the LLM output is wrong and they don't know enough to understand why. They then tend to trust the LLM anyway and end up prompting variations of the same thing over and over again to no avail. Other studies similarly found that while good students can work faster with AI, many others are actually worse off due to being misled.

I still see them largely as black boxes

The crazy part is that they are, even for the researchers who came up with them. Sure, we can understand how the data flows from input to output, but realistically not a single person in the world could look at all of the weights in an LLM and tell you what it has learned. Basically everything we know about their capabilities on tasks is based on just trying things out and seeing how well they work. Hell, even "prompt engineers" are making a lot of their decisions based on vibes only.

[–] [email protected] 2 points 1 month ago (2 children)

I don't know if it's just my age/experience or some kind of innate "horse sense", but I tend to do alright with detecting shit responses, whether they be human trolls or an LLM that is lying through its virtual teeth

I'm not sure how you would do that if you are asking about something you don't have expertise in yet, as it takes the exact same authoritative tone no matter whether the information is real.

Perhaps with a reasonable prompt an LLM can be more honest about when it's hallucinating?

So far, research suggests this is not possible (unsurprisingly, given the nature of LLMs). Introspective outputs, such as certainty or justifications for decisions, do not map closely to the LLM's actual internal state.

[–] [email protected] 5 points 1 month ago

kind've

Ok, not to be nitpicky, but this is the first time I've ever seen the opposite (complementary?) mistake to "could of". That's actually kinda fun :D

[–] [email protected] 2 points 1 month ago

Oh shoot my bad haha

[–] [email protected] 2 points 1 month ago (2 children)

.. why did you have ChatGPT write this? Clearly you have your own thoughts on this, no need to ask a machine lol

12
submitted 11 months ago* (last edited 11 months ago) by [email protected] to c/[email protected]
 

Hello dear community,

Unfortunately, I failed to save my feddit.de subscriptions anywhere in time. Since feddit.de is now apparently gone for good, is there still any way to get the list of subscriptions I had on that account?

I'm not too optimistic, but I thought maybe subscriptions get carried along somewhere in the Fediverse, so I'm asking anyway.

Thanks for any help :)

Edit: it still works here, in case anyone else has the same question :) thanks @caos for the tip!
