My current setup is an Odroid H3+ running Ubuntu Server with 16GB of RAM, 2x8TB hard drives configured in RAID 1 via mdadm, and a 1TB SSD for system storage.
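For reference, the mirror is the textbook two-disk mdadm setup, roughly created like the sketch below (device names and mount point are placeholders, not my exact ones):

```
# create a two-disk RAID 1 array and put a filesystem on it
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/raid1
```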

I'm looking to expand my storage with an external drive enclosure that connects over USB 3. My ideal enclosure has 4 bays with hardware RAID support, which I plan to configure as RAID 10.

My main question: how do I pool these together so I don't need to balance data between the two? Ideally they'd all be bound together and appear as a single mount point. I looked at mergerfs, but I'm not sure it would work here.
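For what it's worth, the kind of thing I was picturing with mergerfs is a single fstab line that unions the two arrays under one mount point; the paths and options below are just a sketch based on its docs, not something I've tested:

```
# /etc/fstab — pool the mdadm mirror and the enclosure under one mount point.
# category.create=mfs writes new files to whichever branch has the most free space.
/mnt/raid1:/mnt/raid10  /mnt/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs,minfreespace=50G,fsname=pool  0 0
```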

I ask primarily because I have scripts that move data onto the current RAID 1 setup. As I expand the storage, I want them to keep working without having to check for free space across the two sets.
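Concretely, the hope is that the scripts can keep targeting one path and let the pooling layer decide where files land; the paths here are made up for illustration:

```
# the script only ever sees the pooled mount point; with a most-free-space
# create policy, the pooling layer picks whichever branch has room, so no
# manual space check between the two arrays is needed
rsync -a --remove-source-files /srv/incoming/ /mnt/pool/archive/
```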

Be real with me - is this dumb? What would you suggest given my current stack?

[–] [email protected] 5 points 2 years ago* (last edited 2 years ago)

I wouldn't mix SW and HW RAID; if you can, try to stick with one or the other. By mixing them you'll open yourself up to a whole can of worms if there's a disk failure, IMO.

If this were BTRFS (or, to a lesser extent, ZFS), the solution would be to chuck the external USB enclosure into JBOD mode and just add the disks to the array. The available storage would grow automatically with each disk you add, and you wouldn't need to change your script...
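For the BTRFS case that's roughly the following; treat the device names and mount point as placeholders for whatever the enclosure actually exposes:

```
# add the enclosure's disks to the existing btrfs filesystem; usable space grows right away
btrfs device add /dev/sdc /dev/sdd /mnt/data
# rebalance so existing data and metadata get spread across the new devices
btrfs balance start /mnt/data
```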

Since you're using mdadm, I believe you'll need to add two new disks, but I'm not sure you can expand the space on the existing mount point without using a filesystem merging tool.
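The one mdadm-only route I know of for getting more space on the same mount point is swapping both mirror members for bigger disks and then growing the array, rather than adding disks alongside them; a rough sketch, with placeholder device names and assuming ext4 sits directly on /dev/md0:

```
# replace each 8TB member with a larger disk, one at a time, letting the mirror resync in between
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
mdadm /dev/md0 --add /dev/sdc1
# once both members are replaced and resynced, grow the array and then the filesystem
mdadm --grow /dev/md0 --size=max
resize2fs /dev/md0
```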

Edit: accidentally submitted early

[–] [email protected] 2 points 2 years ago

Thanks for the ideas - at first I was thinking I could simplify things by having the disk enclosure mount as a single drive, and then I would just need to merge it with my existing software setup. I'm getting pretty unanimous feedback that this is not a good idea.

I'll dive deeper into mdadm's documentation and see if I can do some magic here. I realize it's not the most elegant solution, but I'd really prefer to keep my existing setup and add to it.

Thanks again for your input, I appreciate it!