[kwlug-disc] Docker Host Appliance

Ronald Barnes ron at ronaldbarnes.ca
Mon Jan 16 19:23:27 EST 2023


Chris Irwin via kwlug-disc wrote on 16/01/2023 14.45:

> But I'm a home user. And a cheap one, at that. I have a bunch of
> data and a bunch of disks, and I want to be reasonably sure both the
> data (and backups) are valid.

Then ZFS is the only option: as far as I understand it, only ZFS
detects (and can *correct*) corrupt data.



> Back in the day I had 2 drives. Eventually I wanted to expand, so I 
> bought two more drives, and added them to the filesystem (was using 
> btrfs). Run a rebalance, it shifts data around, and I have a bunch 
> more storage capacity.
> 
> With ZFS, I don't think I can do that. I can't take my four drives, 
> and turn it into a six-drive array. I'd either have to build a whole
>  new larger array (vdev?) and migrate to that, then throw out the old
>  disks.

I'm not very knowledgeable about ZFS, but the https://2.5admins.com/
podcast has Allan Jude (a ZFS dev from Hamilton) and Jim Salter (former
Ars Technica writer and fellow ZFS enthusiast), and it's almost a meme
how often they work ZFS into conversations.

From listening to them, as I understand it, you'd take the 2 new disks,
make a vdev from them, then add that vdev to your pool, and the pool's
storage expands.
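
A rough sketch of what I mean (pool name "tank" and the device names
are just placeholders, and I haven't tested this myself):

   zpool add tank mirror /dev/sdc /dev/sdd

That would attach the two new disks as an additional mirror vdev, and
the pool grows by that vdev's capacity.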


Corrections to my understanding welcome!



> I've been using mdadm (+lvm) and btrfs for a lot of years,
I use mdadm + lvm myself, but only through inertia. Adding btrfs to that
is never gonna happen; as Doug pointed out, it's not reliable.

And with layers upon layers (ext4 on lvm on mdadm), it still doesn't
achieve the features of ZFS at a single layer.
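
To illustrate the layering (device and volume group names here are
made up; a sketch of the stack, not a recipe):

   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
   pvcreate /dev/md0
   vgcreate vg0 /dev/md0
   lvcreate -L 1T -n data vg0
   mkfs.ext4 /dev/vg0/data

versus, roughly, the ZFS equivalent:

   zpool create tank mirror /dev/sda /dev/sdb
   zfs create tank/data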


And that's not even mentioning block-level snapshots, which I don't
think lvm supports?

Even rsync falls far short of ZFS's ability to detect a single changed
block in a 1TB file and back up only that one block.
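
As I understand it, that works through snapshots and incremental
send/receive; something like this (dataset, snapshot, and host names
are invented):

   zfs snapshot tank/data@monday
   # ... one block of the 1TB file changes ...
   zfs snapshot tank/data@tuesday
   zfs send -i tank/data@monday tank/data@tuesday | \
       ssh backuphost zfs receive backup/data

only ships the blocks that changed between the two snapshots.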


> You can switch redundancy levels on the fly,

That's something I don't think ZFS can do.
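
(For anyone following along, I believe the btrfs side of that is a
balance with convert filters on a mounted filesystem; the mountpoint
below is a placeholder:

   btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool

but I don't know of a ZFS equivalent.)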



> if you wanted. WIth BTRFS if I had four disks and one failed (and I 
> have enough free space), I could rebalance the array to use one fewer
> drive and recover a measure of redundancy while waiting for 
> stock/sale/shipping/payday.

Sounds like a handy feature.
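
If I follow, that's something along these lines (untested, and the
device and mountpoint names are made up):

   mount -o degraded /dev/sdb /mnt/pool
   btrfs device remove missing /mnt/pool

which rewrites the missing disk's data across the surviving drives,
provided there's enough free space.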

But again, with btrfs, RAID5/6 *should not be used in production*.



> ZFS also can't fully utilize mismatched disks, apparently. My
> 4-drive array has 2x6TB and 2x8TB drives, which means there's 2x2TB
> worth of unusable space on the 8TB drives. This worked fine with
> btrfs.

I believe you could have 1 vdev of the 6TB drives and 1 vdev of the
8TB drives together in a pool without losing the 2x2TB.
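
Roughly (again a sketch with made-up device names; corrections
welcome):

   zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

where the first mirror is the two 6TB drives and the second is the two
8TB drives; each mirror vdev contributes its own pair's capacity, so
the 8TB pair isn't shrunk down to 6TB.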



Again, I'm no ZFS expert, but the above is my understanding and if any
of it's wrong, happy to be corrected.


Cheers!

rb



