this post was submitted on 14 Jul 2025
601 points (98.5% liked)

Technology

[–] [email protected] 11 points 16 hours ago

Finally, I'll be able to self-host One Piece streaming.

[–] [email protected] 3 points 13 hours ago (1 children)

Can't wait to see this bad boy on serverpartdeals in a couple years if I'm still alive

[–] [email protected] 2 points 13 hours ago* (last edited 7 hours ago)

> if I'm still alive

That goes without saying, unless you anticipate something. Do you?

[–] [email protected] 6 points 16 hours ago* (last edited 16 hours ago)

Finally, a hard drive which can store more than a dozen modern AAA games

[–] [email protected] 10 points 19 hours ago

my qbittorrent is gonna love that

[–] [email protected] 13 points 1 day ago

Great, can't wait to afford it in 60 years.

[–] [email protected] 24 points 1 day ago (2 children)

I'm amazed it's only $800. I figured that shit was gonna be like 8-10 thousand.

[–] [email protected] 9 points 1 day ago (1 children)

Well, it's a Seagate, so it still comes out to about a hundred bucks a month.

[–] [email protected] 18 points 1 day ago (1 children)

Me, who stores important data on a Seagate external HDD with no backup, reading the comments roasting Seagate:

[–] [email protected] 20 points 1 day ago (1 children)

This hard drive is so big that when it sits around the house, it sits around the house.

[–] [email protected] 11 points 1 day ago (1 children)

This hard drive is so big when it moves, the Richter scale picks it up.

[–] [email protected] 14 points 1 day ago (4 children)

This hard drive is so big when it backs up it makes a beeping sound.

[–] [email protected] 9 points 1 day ago (13 children)

What is the use case for drives that large?

I 'only' have 12 TB drives, and yet my ZFS pool already needs ~two weeks to scrub them all. With something like this it would literally not be done before the next scheduled scrub.

[–] [email protected] 2 points 12 hours ago

It's like the Petronas Towers: every time they finish cleaning the windows, they have to start again.

[–] [email protected] 6 points 19 hours ago* (last edited 19 hours ago) (1 children)

Jesus, my pool takes a little over a day, but I've only got around 100 TB. How big is your pool?

[–] [email protected] 1 points 19 hours ago* (last edited 19 hours ago) (1 children)

The pool is about 20 TB usable.

[–] [email protected] 7 points 16 hours ago

Something is very wrong if it's taking 2 weeks to scrub that.
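
For a rough sense of scale (a back-of-the-envelope sketch; the throughput figure is an assumption, not a number from the thread): a scrub only has to read the allocated data, so even a conservative estimate puts a 20 TB pool at roughly a day:

```python
# Rough lower bound on ZFS scrub time: a scrub reads allocated data,
# so time ~= bytes to read / aggregate read throughput.
# The throughput below is an assumed, conservative value.

allocated_tb = 20       # usable data in the pool (from the comment above)
throughput_mb_s = 200   # assumed aggregate sequential read across the vdevs

seconds = allocated_tb * 1e12 / (throughput_mb_s * 1e6)
print(f"~{seconds / 3600:.0f} hours")  # ~28 hours -- nowhere near two weeks
```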

[–] [email protected] 3 points 16 hours ago

Data centers???

[–] [email protected] 5 points 19 hours ago* (last edited 19 hours ago)

Sounds like something is wrong with your setup. I have 20 TB drives (×8, RAID 6, 70+ TB in use); scrubbing takes less than 3 days.

[–] Appoxo 6 points 22 hours ago (1 children)

High-capacity storage pools for enterprises.
Rack space is at a premium, so denser drives could translate into better pricing and availability.

[–] [email protected] 2 points 13 hours ago (1 children)

Not necessarily.

The trouble with spinning platters this big is that if a drive fails, it will take a long time to rebuild the array after shoving a new one in there. Sysadmins will be nervous about another failure taking out the whole array until that process is complete, and that can take days. There was some debate a while back about whether the industry even wanted spinning platters >20TB. Some are willing to give up density if it means less worry.

I guess Seagate decided to go ahead anyway, but the industry may be reluctant to buy this.

[–] Appoxo 1 points 13 hours ago (1 children)

I would assume that with arrays they'll use a different way to calculate parity, or have higher redundancy, to compensate for the risk.

[–] [email protected] 2 points 12 hours ago

If there's higher redundancy, then they are already giving up on density.

We've pretty much covered the likely ways to calculate parity.
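
For anyone who hasn't seen parity spelled out: a minimal sketch of the single-parity (RAID 5-style) idea the thread keeps circling, with toy byte strings standing in for drives:

```python
from functools import reduce

def xor_blocks(blocks: list[bytes]) -> bytes:
    """XOR equal-length blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Toy stand-ins for three data drives; real arrays work on whole stripes.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)          # what the array writes to the parity drive

# Simulate losing drive 1: XOR the survivors with the parity to rebuild it.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("rebuilt drive 1:", rebuilt)  # b'BBBB'
```

A real rebuild does exactly this, but across every stripe of the replacement disk, which is where the multi-day rebuild window discussed below comes from.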

[–] [email protected] 10 points 1 day ago (2 children)

there was a time I asked this question about 500 megabytes

[–] [email protected] 2 points 20 hours ago

I'm not questioning the need for more storage, but the need for more storage without increased speeds.

[–] [email protected] 6 points 1 day ago (1 children)

There is an enterprise storage shelf (i.e., a box of drives that hooks up to a server) made by Dell which holds 1.2 PB (yes, petabytes). So there is a use, but it's not for consumers.

[–] [email protected] 7 points 23 hours ago (1 children)

That's a use-case for a fuckton of total capacity, but not necessarily a fuckton of per-drive capacity. I think what the grandparent comment is really trying to say is that the capacity has so vastly outstripped mechanical-disk data transfer speed that it's hard to actually make use of it all.

For example, let's say you have these running in a RAID 5 array, and one of the drives fails and you have to swap it out. At 190MB/s max sustained transfer rate (figure for a 28TB Seagate Exos; I assume this new one is similar), you're talking about over two days just to copy over the parity information and get the array out of degraded mode! At some point these big drives stop being suitable for that use-case just because the vulnerability window is so large that the risk of a second drive failure causing data loss is too great.
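
Spelling out that arithmetic as a quick sketch (the 190 MB/s figure comes from the comment above; the drive capacity here is an assumed placeholder, not a spec from the article):

```python
# Degraded-array vulnerability window ~= drive capacity / sustained rate.
# 190 MB/s is the Exos figure cited above; the capacity is an assumption
# for illustration -- plug in the actual drive's size.

capacity_tb = 36    # assumed capacity, for illustration only
rate_mb_s = 190     # max sustained transfer rate (from the comment above)

seconds = capacity_tb * 1e12 / (rate_mb_s * 1e6)
print(f"~{seconds / 86400:.1f} days degraded")  # ~2.2 days
```

And that assumes the array can sustain the drive's maximum rate for the whole rebuild, which a busy array usually can't.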

[–] [email protected] 1 points 19 hours ago (1 children)

That's exactly what I wanted to say, yes :D

[–] [email protected] 1 points 14 hours ago (1 children)

I get it. But the moment we invoke RAID or ZFS, we are outside what standard consumers will ever interact with, and therefore into business use cases. Remember, even simple homelab use cases involving Docker are well past what the bulk of the world understands.

[–] [email protected] 1 points 12 hours ago

I would think most standard consumers are not using HDDs at all.

[–] [email protected] 6 points 1 day ago

It's to play Ark: Survival Evolved.

[–] [email protected] 4 points 1 day ago (1 children)
[–] [email protected] 8 points 1 day ago (1 children)

A ZFS scrub validates all the data in a pool and corrects any errors it finds.

[–] [email protected] 1 points 13 hours ago (1 children)

I'm not in the know about running your own personal data center, so I have no idea. But how often is this necessary? Does accessing your own data on your hard drive require a scrub? I just have a 2 TB drive in my home PC. Is a scrub the equivalent of a disk cleanup?

[–] [email protected] 3 points 11 hours ago* (last edited 11 hours ago) (1 children)

You usually scrub your pool about once a month, but there are no hard rules on that. The main problem with scrubbing is that it puts a heavy load on the pool, slowing it down.

Accessing the data does not require a scrub; it is only a routine maintenance task. A scrub is not like a disk cleanup. With a disk cleanup you remove unneeded files and caches, and maybe defragment as well. A scrub, on the other hand, validates that the data you stored on the pool is still the same as before. This is primarily to protect against things like bit rot.

There are many ways a drive can degrade. Sectors can become unreadable, random bits can flip, a write can be interrupted by a power outage, etc. Normal file systems like NTFS or ext4 can only handle this in limited ways, mostly by deleting the corrupted data.

ZFS, on the other hand, is built on redundant storage: it spreads the data across multiple drives in a special way that allows it to recover from most corruption and even survive the complete failure of a disk. This comes at the cost of some capacity, however.
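
To make that monthly routine concrete: `zpool scrub` and `zpool status` are the real ZFS commands; the pool name `tank` and the Python wrapper around them are made up for illustration:

```python
import subprocess

POOL = "tank"  # hypothetical pool name

def start_scrub(pool: str) -> None:
    # `zpool scrub` kicks the scrub off in the background and returns
    # immediately; ZFS then walks the allocated data, verifying checksums.
    subprocess.run(["zpool", "scrub", pool], check=True)

def scrub_status(pool: str) -> str:
    # `zpool status` reports scan progress, anything repaired, and any
    # checksum errors found so far.
    result = subprocess.run(["zpool", "status", pool],
                            check=True, capture_output=True, text=True)
    return result.stdout

start_scrub(POOL)
print(scrub_status(POOL))
```

In practice most people skip the wrapper and just schedule `zpool scrub tank` from a monthly cron job or systemd timer; some distributions ship one out of the box.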

[–] [email protected] 2 points 1 hour ago

Thank you for all this information. One day, when my ADHD forces me into making myself a home server, I'll remember this and keep it in mind. I've always wanted to store movies, but these days it's just family pictures and stuff. I definitely don't have terabytes, but I'm getting up to hundreds of GB.

[–] [email protected] 3 points 1 day ago

I worked on a terrain render of the entire planet. We were filling three 2 TB drives a day for a month. So this would have been handy.
