Do they? As long as you use RAII and modern shit and keep to something like GCC, it should be safe, right?
I don't do C++ these days
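For what "RAII and modern" actually buys you, here's a minimal sketch — `Buffer` and `use_buffer` are made-up names for illustration, not from any real library:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// A stand-in resource. Its cleanup runs in the destructor, so there is
// no manual delete/free call to forget.
struct Buffer {
    std::vector<int> data;
    explicit Buffer(std::size_t n) : data(n, 0) {}
};

std::size_t use_buffer(std::size_t n) {
    // unique_ptr owns the Buffer for the duration of this scope.
    auto buf = std::make_unique<Buffer>(n);
    return buf->data.size();
}   // buf is destroyed here, even if an exception is thrown above
```

The point being made: scope-tied ownership closes off whole classes of leaks and double-frees, though it doesn't cover every flavor of UB on its own.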
You should also really be using the latest chainsaw model with new safety features, but your workplace swears by the gas guzzling piece of shit from 1996
That's "bare" NVMe; the Linux kernel supports such devices, but I really fail to see the fucking point™
Apple of course probably did so to fuck the consumer
NO.
SATA(N) IS BANISHED FROM THIS HOUSE
Edit: wait actually this is dumb. Isn't every single modern drive IDE, as in they have their controller onboard? The 40 pin connector is PATA
If you don't have control over the finer movements of your car, parallel parking is a pretty good way to weed that out. And if you don't, you're gonna fuck up harder in a place that actually matters.
Fuck up when it doesn't matter so you don't fuck up when it matters.
To be fair, LLM technology is really making other fields obsolete. Nobody is going to bother making yet another shitty CNN, GRU, LSTM or something when we have the transformer architecture, and LLMs that do not work with text (like large vision models) are looking like the future
The anti-natalist dude who attacked a fertility clinic was a lemmy radical.
Want a source? Look at the SS Headquarters (also known as lemmy.world)
Nah Lemmy in particular is a worse dump than Miyazaki's poison swamps. The level of zeal on lemmy is staggering (I mean, it's already resulted in one terrorist attack)
I feel like this is because it's much smaller than alternatives. It starts to feel like you're circlejerking the same dicks every day.
You need to actively have the relevant code in context.
I use it to describe code from shitty undocumented libraries, and my local models can explain the code well enough in lieu of actual documentation.
You need multi-shot prompting when it comes to math. Either the motherfucker gets it right, or you will not be able to course-correct it in a lot of cases. Once a token is in the context, it's in the context and you're fucked.
Alternatively you could edit the context, correct the parameters and then run it again.
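The "edit the context and run it again" idea, as a toy sketch — `Msg` and `rewind_and_retry` are hypothetical, not any real inference API:

```cpp
#include <string>
#include <vector>

// One turn of a chat transcript.
struct Msg {
    std::string role;
    std::string text;
};

// Instead of appending a correction after a bad completion (the wrong
// tokens stay visible and keep steering the model), drop the bad turn
// from the context and substitute a regenerated answer before re-running.
std::vector<Msg> rewind_and_retry(std::vector<Msg> ctx,
                                  const std::string& regenerated) {
    ctx.pop_back();                            // remove the bad assistant turn
    ctx.push_back({"assistant", regenerated}); // replace it wholesale
    return ctx;
}
```

Same trick most local frontends expose as "edit response and continue": the model never sees its own wrong answer.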
On the other side of the shit aisle
Shoutout to my man Mistral Small 24B, who is so insecure it will talk itself out of correct answers. It's so much like me in not having any self-worth or confidence.
FWIW BTW This heavily depends on the model. ChatGPT in particular has some of the absolute worst, most vomit inducing chat "types" I have ever seen.
It is also the most used model. We're so cooked having all the laymen associate AI with ChatGPT's nonsense
Bourgeoisie propaganda