[kwlug-disc] Docker Host Appliance
Chris Irwin
chris at chrisirwin.ca
Mon Jan 16 17:45:25 EST 2023
On Sun, Jan 15, 2023, at 21:24, Doug Moen wrote:
> ZFS has a limitation where you can not add disks to a RAIDz vdev.
This is the big one, off the top of my head.
I'm specifically listing my ZFS gripes here. I don't totally hate the filesystem. It has some features that are pretty good. Encryption is built-in, and basically a check-box when creating a dataset. BTRFS doesn't yet have FSCRYPT support, so encryption needs to be layered either above (ecryptfs) or below (dmcrypt). That sucks for a number of reasons.
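For comparison, here's roughly what each side looks like. The pool name "tank" and the device paths are placeholders, and this is from memory, so double-check the man pages:

```shell
# ZFS: encryption is a per-dataset property, set at creation time
zfs create -o encryption=on -o keyformat=passphrase tank/private

# btrfs: layer the filesystem on top of dm-crypt instead
cryptsetup luksFormat /dev/sdb1
cryptsetup open /dev/sdb1 cryptdata
mkfs.btrfs /dev/mapper/cryptdata
```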
Also, TrueNAS itself seems to have the best Web UI of the options I looked at, and it feels more like a true "Storage Appliance" than anything short of an actual physical appliance (Synology, etc). All the other options were much more "Debian, but with a basic web interface and a logo". TrueNAS actually feels like a real product.

To be fair, most of my issues are non-issues for ZFS's real target: businesses with money. If I had a formal budget, forecasted data growth, and servers/drives upgraded on warranty cycles, most of these issues would be a non-concern.
But I'm a home user. And a cheap one, at that. I have a bunch of data and a bunch of disks, and I want to be reasonably sure both the data (and backups) are valid.
Back in the day I had 2 drives. Eventually I wanted to expand, so I bought two more drives and added them to the filesystem (I was using btrfs). I ran a rebalance, it shifted data around, and I had a bunch more storage capacity.
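For anyone who hasn't done it, that btrfs expansion is basically two commands (mountpoint and device names here are made up):

```shell
# Add the new drives to the mounted filesystem
btrfs device add /dev/sde /dev/sdf /mnt/data
# Rebalance so existing data spreads across all the drives
btrfs balance start /mnt/data
```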
With ZFS, I don't think I can do that. I can't take my four-drive array and turn it into a six-drive array. I'd either have to build a whole new, larger array (vdev?) and migrate to it, then throw out the old disks; or replace all the old disks with new, larger ones one by one, resizing the array once every disk is larger (and then throw out the old disks); or run a second array and split my data.
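If memory serves, the replace-one-by-one route looks roughly like this (pool and device names are placeholders):

```shell
# Let the pool grow once every member has been upsized
zpool set autoexpand=on tank
# Swap one old disk for a larger one; wait for the resilver, then repeat
zpool replace tank sda sde
zpool status tank
```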
I've been using mdadm (+lvm) and btrfs for a lot of years, and with either of those, you can easily add disks and expand your array. You can switch redundancy levels on the fly, if you want. With BTRFS, if I had four disks and one failed (and I had enough free space), I could rebalance the array to use one fewer drive and recover a measure of redundancy while waiting for stock/sale/shipping/payday.
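Concretely, that degraded-array recovery is something like the following (mountpoint is a placeholder, and removing a device needs enough free space on the survivors):

```shell
# Drop the dead drive from the array; data is rebuilt onto the remaining disks
btrfs device remove missing /mnt/data
# Redundancy profiles can also be converted on the fly, e.g.:
btrfs balance start -dconvert=single -mconvert=dup /mnt/data
```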
ZFS also can't fully utilize mismatched disks, apparently. My 4-drive array has 2x6TB and 2x8TB drives, which means there's 2x2TB worth of unusable space on the 8TB drives. This worked fine with btrfs.
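If you're curious how btrfs is actually spreading data across mismatched devices, there's a per-device report (mountpoint is a placeholder):

```shell
btrfs filesystem usage /mnt/data
```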
This is a "me" problem, but snapshots don't seem to be visible with ZFS, so recovering a file takes some extra steps with the TrueNAS Web UI to clone a snapshot. On btrfs, you can just 'cp' the file from your snapshot, no extra commands, tools, or effort required.
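To illustrate (the snapshot paths below are made up; they're whatever your snapshot tooling uses). For what it's worth, ZFS does expose read-only snapshots at a hidden .zfs/snapshot/ directory from the shell; it's the TrueNAS UI that adds the extra steps:

```shell
# btrfs: snapshots are plain directories, so recovery is just a copy
cp /mnt/data/.snapshots/daily-2023-01-15/docs/report.odt /mnt/data/docs/

# ZFS (from a shell): snapshots live under a hidden .zfs directory
cp /tank/data/.zfs/snapshot/daily-2023-01-15/report.odt /tank/data/
```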
I also find the new terminology and new layers confusing (vdev, zpool, dataset, zvol, raidz, etc). Granted, this is a "me" problem as well, since I understand pvs, vgs, and lvs just fine. I'll learn, I just wish I didn't have to. Also, why vdev & dataset, and not zdev and zdataset? No idea.
People claim ZFS handles databases/VMs better than btrfs, but I don't really see how, since it appears to use the same COW semantics. Perhaps it's just hidden behind better caching.
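One guess at what people might mean: ZFS lets you tune a dataset's record size down to the database's page size, which should cut read-modify-write amplification on random I/O. A sketch, assuming a hypothetical "tank/mysql" dataset and InnoDB's 16K pages:

```shell
# Match recordsize to the DB page size before loading data
zfs set recordsize=16k tank/mysql
# Optionally favor throughput over latency for synchronous writes
zfs set logbias=throughput tank/mysql
```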
--
*Chris Irwin*
email: chris at chrisirwin.ca
web: https://chrisirwin.ca