ayaya

joined 2 years ago
[–] [email protected] 86 points 2 years ago (6 children)

To be fair, it's the exact same bypass as any other Steam game. Any Steam emulator would work.

[–] [email protected] 4 points 2 years ago (3 children)

Maybe that was a bad example to use, that is my bad. It was just the first thing I thought of. The government needs to define all sorts of things, not just criminal acts. You say it's being human? They even define what a human is. Laws have to be written in such a way as to include explicit definitions so they can be enforced without loopholes. (Or in some cases to create loopholes, like with the rich and taxes.)

[–] [email protected] 8 points 2 years ago (5 children)

Not that I agree with what is happening, but they are defining it in legal terms, which is absolutely their job. A simple example: killing someone is just killing someone, and it's the government that defines what counts as murder and what counts as manslaughter.

[–] [email protected] 7 points 2 years ago (6 children)

GPUs are pretty reasonably priced if you buy used. The 3060 Ti goes for $237 and it beats the 4060.

[–] [email protected] 20 points 2 years ago* (last edited 2 years ago)

For codecs it is highly dependent on the release group. For 4K, x265 is the only valid option, but for 1080p a lot of groups make their x265 encodes too small and sacrifice quality. Take a look at the group rankings in the TRaSH Guides for Sonarr and Radarr for a general idea of who is the best/worst.

As for Tdarr, you should really only use it for audio and subtitle processing. For one, you should not re-encode video: unless you're starting with remuxes, you're further degrading video that is already degraded. And for two, encoding is best left to the people who know what settings to tweak for each movie or episode. There is no universal setting that works well for everything, so while you might get acceptable quality with automation, it's never going to be great. The best groups already took the time and effort to get it right, so you might as well grab their encodes and save yourself the time/electricity.
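To be concrete about what "audio/subtitle processing only" means: the video stream gets copied bit-for-bit and only the other tracks are touched. A rough sketch of the underlying ffmpeg call, wrapped in Python (the file names and the AAC/192k audio target are just placeholder assumptions, not a recommendation):

```python
import subprocess

src = "episode.mkv"            # placeholder input file
dst = "episode.processed.mkv"  # placeholder output file

subprocess.run([
    "ffmpeg", "-i", src,
    "-map", "0",                    # keep every stream from the input
    "-c:v", "copy",                 # copy video untouched: no generational loss
    "-c:a", "aac", "-b:a", "192k",  # re-encode only the audio (example target)
    "-c:s", "copy",                 # pass subtitle tracks through unchanged
    dst,
], check=True)
```

The important part is `-c:v copy`: as long as the video codec is set to copy, the transcode only ever touches audio and subtitles.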

[–] [email protected] 2 points 2 years ago* (last edited 2 years ago)

That was part of my reason for linking it, and also why I put "convert" in quotes. It really is just Arch, pre-configured with some themes and extra utilities.

I actually didn't know they had their own repo until I took a look yesterday, and not only is it tiny but it seems to be mostly themes, configs, and tools. I don't think they even have alt versions of existing packages, just additions.
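If anyone wants to check for themselves, listing a repo's contents is a one-liner; here's a quick sketch in Python (assuming the repo is named endeavouros in pacman.conf, as on a default install):

```python
import subprocess

# "pacman -Sl <repo>" lists every package a repo provides.
out = subprocess.run(
    ["pacman", "-Sl", "endeavouros"],
    capture_output=True, text=True, check=True,
).stdout

# Each line looks like: "endeavouros eos-hooks 1.0-1"
packages = [line.split()[1] for line in out.splitlines() if line]
print(f"{len(packages)} packages:", packages)
```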

[–] [email protected] 24 points 2 years ago* (last edited 2 years ago) (5 children)

A major difference is that Manjaro has its own repos, which have a tendency to break AUR packages, while EndeavourOS uses the normal Arch repos. Endeavour is pretty much just pre-configured Arch, so it bypasses a lot of the security and stability issues that Manjaro suffers from.

I still think people should just use vanilla Arch so they can customize everything to the fullest, but EndeavourOS is a decent option.

[–] [email protected] 2 points 2 years ago* (last edited 2 years ago) (1 children)

I was trying to make it as simple as possible. The format is irrelevant: the model is still storing nothing but weights at the end of the day. Storing the relationships between words and sentences is not the same thing as storing works in a different format, which is what your original comment implied.

[–] [email protected] 5 points 2 years ago (1 children)

That's not how the autism spectrum works. For one, what you're describing is a gradient, not a spectrum, and for two, the autism spectrum is a spectrum of autism, not a spectrum of everything. To be on the spectrum you have to be autistic.

[–] [email protected] 6 points 2 years ago* (last edited 2 years ago) (5 children)

But it's not just converting them into a different format. It's not even storing that information at all. It can't actually reproduce anything from the dataset unless the dataset is really small or the model is completely overfitted, and neither applies to GPT given how massive it is.

Each neuron, which represents a word or a phrase, is a set of weights. One source makes a neuron go up by 0.000001%, then another source makes it go down by 0.000001%, and that repeats millions and millions of times. The model has absolutely zero knowledge of any specific source in its training data; it only knows how often different words and phrases occur next to each other. For images, it only knows that certain pixels are weighted toward certain colors, and so on.
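To make that concrete, here's a deliberately tiny sketch (nothing like GPT's actual architecture, just the aggregate-statistics point): many sources all nudge one shared table of weights, and afterwards no individual source is recoverable from it.

```python
from collections import defaultdict

# One shared table of bigram weights stands in for "the model".
# Every source nudges the same numbers; nothing per-source is kept.
weights = defaultdict(float)

def train(text, lr=0.000001):
    words = text.lower().split()
    for a, b in zip(words, words[1:]):
        weights[(a, b)] += lr  # a tiny nudge per observed word pair

sources = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat and a dog met",
]
for s in sources:
    train(s)

# Only aggregate co-occurrence statistics remain; there is no way to
# tell which source produced which nudge, let alone reproduce one.
for pair, w in sorted(weights.items()):
    print(pair, w)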

[–] [email protected] 1 points 2 years ago

Yep, I use the binhex container too; it makes everything really easy to set up.
