[kwlug-disc] Docker Host Appliance

Doug Moen doug at moens.org
Mon Jan 16 18:43:19 EST 2023


BTRFS RAID56 is still experimental, and not to be used for production.
https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#raid56-status-and-recommended-practices

> The RAID56 feature provides striping and parity over several devices, same as the traditional RAID5/6. There are some implementation and design deficiencies that make it unreliable for some corner cases and the feature *should not be used in production, only for evaluation or testing*. The power failure safety for metadata with RAID56 is not 100%.

This warning was in the documentation in 2016 when I decided to use ZFS, and the warning is still there. Don't use in production.

I decided in 2016 that I don't trust the BTRFS development process if they release features that are known to cause data loss. That was why I chose ZFS for my archive disk cluster. I'm not worried about BTRFS root on a single disk, since that is widely considered reliable (or at least good enough), and I've since deployed it a couple of times.

Given the bad optics of the long-lived "experimental" warnings on released features in the official BTRFS docs, I imagine the BTRFS team would have already shipped the new ZFS feature, since the code was finished in the spring of last year. I'm happy to see that it is in fact still quarantined in a branch under an open pull request, and I want it to stay there until the test coverage meets ZFS standards. Whether my feelings about this match reality, I can't say.

Doug.

On Mon, Jan 16, 2023, at 5:45 PM, Chris Irwin via kwlug-disc wrote:
> On Sun, Jan 15, 2023, at 21:24, Doug Moen wrote:
>> ZFS has a limitation where you cannot add disks to a RAIDz vdev.
> 
> This is the big one, off the top of my head.
> 
> I'm specifically listing my ZFS gripes here. I don't totally hate the filesystem. It has some features that are pretty good. Encryption is built-in, and basically a check-box when creating a dataset. BTRFS doesn't yet have FSCRYPT support, so encryption needs to be layered either above (ecryptfs) or below (dmcrypt). That sucks for a number of reasons.
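For context, the "check-box" encryption described above corresponds to a one-liner at dataset creation time, while the btrfs-with-dmcrypt alternative needs a separate layer below the filesystem. Pool, dataset, and device names here are hypothetical:

```shell
# ZFS: native per-dataset encryption, key prompted for interactively.
zfs create -o encryption=on -o keyformat=passphrase tank/private

# btrfs alternative: layer LUKS (dmcrypt) below the filesystem instead.
cryptsetup luksFormat /dev/sdX
cryptsetup open /dev/sdX cryptroot
mkfs.btrfs /dev/mapper/cryptroot
```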
> 
> Also, TrueNAS itself seems to have the best Web UI of the options I looked at, and is truly more of a "Storage Appliance" than anything other than an actual physical appliance (Synology, etc.). All the other options were much more "Debian, but with a basic web interface and a logo". TrueNAS actually feels like a real product.
> 
> To be fair, most of my issues are non-issues for the real target market for ZFS: businesses with money. If I had a formal budget, forecasted data growth, and servers/drives upgraded on warranty cycles, most of these issues would be a non-concern.
> 
> But I'm a home user. And a cheap one, at that. I have a bunch of data and a bunch of disks, and I want to be reasonably sure both the data (and backups) are valid.
> 
> Back in the day I had 2 drives. Eventually I wanted to expand, so I bought two more drives and added them to the filesystem (I was using btrfs at the time). Run a rebalance, it shifts data around, and I have a bunch more storage capacity.
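The grow-in-place workflow described above is roughly two commands on btrfs (mount point and device names are hypothetical):

```shell
# Add the two new drives to the mounted filesystem...
btrfs device add /dev/sde /dev/sdf /mnt/data

# ...then rebalance so data and redundancy spread across all members.
btrfs balance start /mnt/data
```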
> 
> With ZFS, I don't think I can do that. I can't take my four drives, and turn it into a six-drive array. I'd either have to build a whole new larger array (vdev?) and migrate to that, then throw out the old disks. Or replace all the old disks with new, larger ones one-by-one, finally resizing the array once all the disks are larger. Then throw out the old disks. Or have a second array and split my data.
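The replace-every-disk workaround sketched above looks something like this on ZFS (pool and device names hypothetical; each swap requires a full resilver before the next):

```shell
# Let the pool grow automatically once every member has been replaced.
zpool set autoexpand=on tank

# Swap one old disk for a larger one; repeat for each member.
zpool replace tank sda sde

# Confirm the resilver finished before starting the next swap.
zpool status tank
```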
> 
> I've been using mdadm (+lvm) and btrfs for a lot of years, and with either of those, you can easily add disks and expand your array. You can switch redundancy levels on the fly, if you want. With BTRFS, if I had four disks and one failed (and I had enough free space), I could rebalance the array to use one fewer drive and recover a measure of redundancy while waiting for stock/sale/shipping/payday.
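The shrink-to-recover-redundancy trick mentioned above is a single operation on btrfs, assuming enough free space on the survivors (device name and mount point hypothetical):

```shell
# Drop a drive from the filesystem; btrfs rebuilds the missing copies
# onto the remaining devices as part of the removal.
btrfs device remove /dev/sdc /mnt/data

# If the disk is already dead and the fs is mounted degraded:
btrfs device remove missing /mnt/data
```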
> 
> ZFS also can't fully utilize mismatched disks, apparently. My 4-drive array has 2x6TB and 2x8TB drives, which means there's 2x2TB worth of unusable space on the 8TB drives. This worked fine with btrfs.
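The stranded space works out as follows: a raidz vdev sizes every member to its smallest disk, so each 8TB drive contributes only 6TB. A quick back-of-envelope check (raidz2 across all four drives is an assumed layout, not stated above):

```shell
# Hypothetical numbers matching the array above: 2x6TB + 2x8TB, raidz2.
small=6; large=8; ndisks=4; parity=2
usable=$(( (ndisks - parity) * small ))   # every member counts as 6TB
stranded=$(( 2 * (large - small) ))       # unused tail of the 8TB drives
echo "usable=${usable}TB stranded=${stranded}TB"   # usable=12TB stranded=4TB
```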
> 
> This is a "me" problem, but snapshots don't seem to be visible with ZFS, so recovering a file takes some extra steps with the TrueNAS Web UI to clone a snapshot. On btrfs, you can just 'cp' the file from your snapshot, no extra commands, tools, or effort required.
> 
> I also find the new terminology and new layers confusing (vdev, zpool, dataset, zvol, raidz, etc.). Granted, this is a "me" problem as well, since I understand pvs, vgs, and lvs just fine. I'll learn, I just wish I didn't have to. Also, why vdev & dataset, and not zdev and zdataset? No idea.
> 
> People claim ZFS handles databases/VMs better than btrfs, but I don't really see how since it appears to use the same COW semantics. Perhaps it's just hidden behind better caching.
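One concrete difference often cited here is tuning, not semantics: ZFS lets you match a dataset's record size to the database page size, while btrfs users typically disable COW for hot files instead. A hedged sketch, with hypothetical names:

```shell
# ZFS: match recordsize to the DB page size (e.g. 16K for InnoDB) so a
# page rewrite doesn't read-modify-write a full default 128K record.
zfs create -o recordsize=16k tank/mysql

# btrfs: the usual workaround is to disable COW on the data directory
# (the attribute must be set while the directory is still empty).
mkdir /mnt/data/vm-images
chattr +C /mnt/data/vm-images
```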
> 
> -- 
> *Chris Irwin*
> 
> email:   chris at chrisirwin.ca
>   web: https://chrisirwin.ca
> 
> _______________________________________________
> kwlug-disc mailing list
> To unsubscribe, send an email to kwlug-disc-leave at kwlug.org
> with the subject "unsubscribe", or email
> kwlug-disc-owner at kwlug.org to contact a human being.
> 

