patatahooligan

joined 2 years ago
[–] patatahooligan@lemmy.world 12 points 3 days ago (2 children)

Sign the petition even if it has surpassed 1 million signatures by the time you read this! The signatures will be verified after the petition is complete, and that could lead to the removal of any number of them. We don't want to barely make it. Let's go as high as possible!

[–] patatahooligan@lemmy.world -4 points 6 days ago

That's just, like, your opinion, man.

[–] patatahooligan@lemmy.world 6 points 6 days ago (4 children)

Yeah, for sure there's a ton of clickbait, but this isn't "a minor technical matter". The news here isn't the clash over whether the patch should be accepted in the RC branch, but the fact that Linus said he wants to remove bcachefs from the kernel tree.

[–] patatahooligan@lemmy.world 5 points 1 week ago

"Fair use" is the exact opposite of what you're saying here. It says that you don't need to ask for any permission. The judge ruled that obtaining illegitimate copies was unlawful but use without the creators consent is perfectly fine.

[–] patatahooligan@lemmy.world 2 points 1 week ago

Of course they're not "three laws safe". They're black boxes that spit out text. We don't have enough understanding and control over how they work to force them to comply with the three laws of robotics, and the LLMs themselves do not have the reasoning capability or the consistency to enforce them even if we prompt them to.

[–] patatahooligan@lemmy.world 6 points 3 weeks ago

I'm sure many people don't even think about that. Having to reinstall all your packages from scratch is not something they do frequently.

And for the people who are looking to optimize the initial setup, there are many ways to do it without a declarative package manager. You can:

  • Write a script for your initial setup that includes installing packages
  • Use a tool like Ansible
  • Use meta-packages
  • Export your currently installed packages to a file and pass that to the package manager on the new installation (a rough sketch of this follows below)
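For that last option, a minimal sketch assuming an Arch-based system with pacman (other package managers have equivalent commands):

```
# On the old system: list explicitly installed packages
pacman -Qqe > pkglist.txt

# On the new installation: feed the list back to pacman
# (--needed skips anything that is already installed)
pacman -S --needed - < pkglist.txt
```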
[–] patatahooligan@lemmy.world 1 point 3 weeks ago

Many times these keys are obtained illegitimately and they end up being refunded. In other cases the key is bought from another region so the devs do get some money, but far less than they would from a regular purchase.

I'm not sure exactly how the illegitimate keys are obtained, though. Maybe in trying to not pay the publisher you end up rewarding someone who steals people's credit cards or something.

[–] patatahooligan@lemmy.world 20 points 1 month ago (2 children)

> They work the exact same way we do.

Two things being difficult to understand does not mean that they are the exact same.

[–] patatahooligan@lemmy.world 2 points 1 month ago (2 children)

NVMe drives are claiming sequential write speeds of several GBps (capital B as in bytes). The article talks about 10Gbps (lowercase b as in bits), which is 1.25GBps. Even with raw storage writes, the NVMe drive might not be the bottleneck in this scenario.

And then there's the fact that disk writes are buffered in RAM. These motherboards are not available yet, so we're talking about future PC builds. It is safe to say that many of them will be used in systems with 32GB of RAM. If you're idling or doing light activity while waiting for a download to finish, you'll have most of your RAM free and could buffer 25-30GB before storage speed becomes a factor.
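If you want to see the page cache at work, here's a rough sketch (assuming a scratch file on the drive you're testing; the path is a placeholder):

```
# Buffered write: lands in the page cache first, so it can finish
# far faster than the drive's raw write speed
dd if=/dev/zero of=/mnt/scratch/testfile bs=1M count=4096

# Direct write: bypasses the page cache, closer to the drive's real speed
dd if=/dev/zero of=/mnt/scratch/testfile bs=1M count=4096 oflag=direct
```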

[–] patatahooligan@lemmy.world 1 point 1 month ago (1 child)

> So the SSD is hiding extra, inaccessible cells. How does blkdiscard help? Either the blocks are accessible, or they aren't. How are you getting at the hidden cells with blkdiscard?

The idea is that blkdiscard will tell the SSD's own controller to zero out everything. The controller can actually access all blocks regardless of what it exposes to your OS. But will it do it? Who knows?

I feel that, unless you know the SSD supports secure trim, or you always use -z, dd is safer, since blkdiscard can give you a false sense of security, and TRIM adds no assurances about wiping those hidden cells.
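For reference, the variants being discussed look roughly like this (the device name is a placeholder, and whether -s works at all depends on the drive):

```
# Plain discard: marks all blocks unused; what the firmware actually
# does with them is up to the controller
blkdiscard /dev/sdX

# Secure discard: only works if the drive advertises support for it
blkdiscard -s /dev/sdX

# Zero-fill instead of discarding, roughly comparable to dd'ing zeros
# over the visible block range
blkdiscard -z /dev/sdX
```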

After reading all of this I would just do both... Each method fails in different ways so their sum might be better than either in isolation.

But the actual solution is to always encrypt all of your storage. Then you don't have to worry about this mess.
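A minimal sketch of that approach with LUKS (the device name is a placeholder, and this destroys whatever is currently on the partition):

```
# Encrypt the partition; everything written afterwards is ciphertext,
# so a lost or resold drive only leaks random-looking bytes
cryptsetup luksFormat /dev/sdX2

# Open it and create a filesystem on the mapped device
cryptsetup open /dev/sdX2 cryptroot
mkfs.ext4 /dev/mapper/cryptroot
```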

[–] patatahooligan@lemmy.world 1 point 1 month ago (3 children)

I don't see how attempting to overwrite would help. The additional blocks are not addressable on the OS side. dd will exit when it reaches the end of the visible device space, but the hidden blocks will remain untouched internally.

> The Arch wiki says blkdiscard -z is equivalent to running dd if=/dev/zero.

Where does it say that? Here it seems to support the opposite. The linked paper says that two passes worked "in most cases", but the results are unreliable. On one drive they found 1GB of data to have survived 20 passes.

[–] patatahooligan@lemmy.world 1 point 1 month ago (5 children)

> in this case, wiping an entire disk by dumping /dev/random must clean the SSD of all other data.

Your conclusion is incorrect because you made the assumption that the SSD has exactly the advertised storage or infinite storage. What if it's over-provisioned by a small margin, though?
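To make that concrete, here's a sketch of the wipe being described (the device name is a placeholder), with the caveat spelled out:

```
# Overwrite every LBA the drive exposes with random data
dd if=/dev/urandom of=/dev/sdX bs=1M status=progress

# Caveat: any over-provisioned flash the controller keeps outside the
# exposed LBA range is never addressed by this command, so remapped or
# spare blocks can still hold old data.
```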

20
submitted 2 years ago* (last edited 2 years ago) by patatahooligan@lemmy.world to c/linux@lemmy.ml
 

I have an SSD from a PC I no longer use. I need to keep a copy of all its data for backup purposes. The problem is that dd reports "Input/output error"s when copying from the drive. There seem to be 20-30 of them across the entire 240GB drive, so it is likely that most or all of my data is still intact.

What I'm concerned about is whether these input/output errors can cause issues in the image outside of the particular bad blocks. How does dd handle these errors? Will they be, e.g., zeroed in the output or will they simply be missing? If they are simply missing, will the filesystem be corrupted because the location of data has shifted? If so, what tool should I be using to save what can be saved?
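From what I've read so far, plain dd can at least be told to keep going and pad unreadable blocks with zeros so that nothing shifts position; a sketch of what I mean (device and file names are placeholders):

```
# noerror: keep reading after an error instead of aborting
# sync: pad each failed/short input block with zeros, so the image
#       stays the same size and later data keeps its offset
dd if=/dev/sdX of=ssd.img bs=4096 conv=noerror,sync
```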

EDIT: Thanks for the help guys. I went with ddrescue and it reports having saved 99.99% of the data. I guess there could still be significant loss if the 0.01% happens to be on filesystem structures, but in that case maybe I can use an undeleter or similar utility to see if I can get the files back. In any case, I can work at my leisure now that I have a copy of the data on non-failing storage.
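For anyone finding this later, the usual ddrescue workflow looks roughly like this (device and file names are placeholders, not my exact commands):

```
# First pass: copy everything that reads cleanly, recording progress
# in a mapfile so the run can be resumed
ddrescue /dev/sdX ssd.img ssd.map

# Then retry just the bad areas a few more times using the same mapfile
ddrescue -r3 /dev/sdX ssd.img ssd.map
```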

 

cross-posted from: https://kbin.social/m/steamdeck@sopuli.xyz/t/21836

Big improvements and new features for the Steam Desktop client are now out of Beta!
