ayaya
Wow, that is incredibly unfortunate timing.
As long as you still have access to the CLI it should be fixable. If you want to still try to get to Plasma 6, make sure you also enable the `core-testing` and `extra-testing` repos in addition to `kde-unstable`, as per the wiki:

> If you enable any other testing repository listed in the following subsections, you must also enable both core-testing and extra-testing

I missed that little snippet when I first swapped over.
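For reference, a rough sketch of what those entries look like in `/etc/pacman.conf` (this is a config fragment, not something to paste blindly; the exact `Include` lines should match what your stable repos already use):

```ini
# /etc/pacman.conf -- order matters: pacman prefers whichever repo
# is listed first, so testing/unstable entries go above their
# stable counterparts.

[kde-unstable]
Include = /etc/pacman.d/mirrorlist

[core-testing]
Include = /etc/pacman.d/mirrorlist

[core]
Include = /etc/pacman.d/mirrorlist

[extra-testing]
Include = /etc/pacman.d/mirrorlist

[extra]
Include = /etc/pacman.d/mirrorlist
```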
If you do `yay kf6` you can install all of the framework-related packages, which might also help fill out some missing dependencies. For me it's 1-71. You can do the same with `yay plasma` and then choose the ones from kde-unstable (122-194 for me), but you will have to manually avoid the ones with conflicts like `plasma-framework`.
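Roughly what that looks like in practice (the number ranges are selections in yay's interactive prompt and will differ on your system):

```shell
# yay searches for "kf6" and prompts with a numbered package list;
# entering a range like 1-71 selects all of those packages at once
yay kf6

# same idea for the plasma packages: pick the kde-unstable entries
# from the numbered list and skip known conflicts by hand
yay plasma
```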
But if you want to try and revert, theoretically simply removing the testing and unstable repos and doing another `sudo pacman -Syu` should get you back onto the older versions.
You can do `sudo pacman -Syudd`, where the `dd` is for ignoring dependencies to force it through. But be aware this is basically asking for things to break. Some packages haven't been updated to the latest versions yet. For example, Dolphin wouldn't launch, so I had to switch to `dolphin-git` from the AUR.
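A sketch of that revert path, with one caveat: a plain `-Syu` won't normally downgrade packages, so after removing the testing repos you'll likely want the second `u` in `-Syuu`, which tells pacman that downgrades are allowed:

```shell
# 1. Comment out [kde-unstable], [core-testing] and [extra-testing]
#    in /etc/pacman.conf, then:
sudo pacman -Syuu    # second 'u' permits downgrades back to stable

# 2. Only if dependency conflicts block the transaction:
sudo pacman -Syudd   # 'dd' skips dependency checks; easy way to break things
```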
I'm honestly not sure what you're trying to say here. If by "it must have access to information for reference" you mean it has access while it is running, it doesn't. Like I said, that information is only available during training. Either you're trying to make a point I'm just not getting, or you are misunderstanding how neural networks function.
I think you are confused; how does any of that make what I said a lie?
The important distinction is that this "database" would be the training data, which it only has access to during training. It does not have access once it is actually deployed and running.
It is easy to think of it like a human taking a test. You are allowed to read your textbooks as much as you want while you study, but once you actually start the test you can only go off of what you remember. Sure you might remember bits and pieces, but it is not the same thing as being able to directly pull from any textbook you want at any time.
It would require you to have a photographic memory (or in the case of ChatGPT, terabytes of VRAM) to be able to perfectly remember the entirety of your textbooks during the test.
And even then, there is no "database" that contains portions of works. The network is only storing the weights between tokens: basically, groups of words and/or phrases and their likelihood of appearing next to each other. So if it is able to replicate anything verbatim, it is just overfitted. Ironically, the solution is to feed it even more works so it is less likely to be able to reproduce any single one.
Where did I say it was an attack? Now you're projecting. I just find it frustrating when someone is being downvoted for something that is verifiably true. It wasn't directed at you specifically.
Bitcoin has been mined on ASICs since 2013. GPUs are outclassed when it comes to specialized hardware. The Reddit classic of downvoting a completely correct comment has carried over to Lemmy I see.
As someone who enjoyed Google Inbox before they killed it, it hurts to read this comment.
And the main reason they're cheaper is because all of them are data harvesting machines. What a fun world where even your habits are a commodity!