[kwlug-disc] Filesystems for backups
L.D. Paniak
ldpaniak at fourpisolutions.com
Wed Aug 7 21:57:37 EDT 2019
OK. Let me take a swing at this...
I will pitch a ZFS-on-Ubuntu option; I think that covers the
well-supported/easy-to-admin requirements.
There is plenty of ZFS documentation out there (thanks, Sun!) and the
ZFS on Linux mailing list is helpful.
A useful tool is the ServeTheHome RAID calculator which does ZFS:
https://www.servethehome.com/raid-calculator/
Other preliminaries...
1) A ZFS pool is made up of vdevs striped together. As Mikalai points
out elsewhere, a vdev can be a single drive, a mirror, or a
RAIDZ[1,2,3] group.
vdevs are independent objects: you can stripe across N vdevs of
different types. The total available capacity is the sum of the
individual capacities; performance is better if the capacities match.
2) For individual drives bigger than 2TB, I think RAIDZ2 is a
requirement, especially with used HDDs: resilvers on big drives take
long enough that a second failure (or an unrecoverable read error)
during the rebuild is a real risk.
3) Can your drive shelf handle drives with 4K sectors? I tend to doubt
it. In that case, you will probably never use a drive >4TB in your array.
4) You will never add single drives to your ZFS array to expand
capacity. You will only ever add whole (RAIDZ2) vdevs, build a second
pool with a different vdev configuration, or replace all drives in a
vdev with larger drives (see the sketch after this list).
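To make points 1 and 4 concrete, here is a minimal sketch of building
a pool and growing it a whole vdev at a time. The pool name and device
names are placeholders; in practice, use stable /dev/disk/by-id/ paths:

  # Create a pool named "tank" with a single (8+2) RAIDZ2 vdev
  # (sdb..sdk are hypothetical device names)
  zpool create tank raidz2 sdb sdc sdd sde sdf sdg sdh sdi sdj sdk

  # Expansion means adding another whole RAIDZ2 vdev; ZFS then
  # stripes writes across both vdevs
  zpool add tank raidz2 sdl sdm sdn sdo sdp sdq sdr sds sdt sdu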
Configuration options to get 15-20TB today with 1TB and 3TB drives:
1) Forget the 1TB drives. If you have 10x 3TB drives, build a ZFS pool
with a single (8+2) RAIDZ2 vdev.
That should get you about 20TB usable (knock 10% off what the RAID
calculator says - ZFS is not capacity efficient - data consistency has a
cost!)
If you want to expand: build a second pool with (6+2) 4TB drives and
zfs send the data across from the old pool to the new pool (sketch
below). Destroy the original pool and remove the 3TB HDDs. Expand the
new pool with (6+2) vdevs of 4TB drives, one vdev at a time.
That way you can get up to 60TB usable with 24x 4TB HDDs. You can also
do it with (10+2) 4TB vdevs to get to 70TB at lower performance (more
vdevs = more random IOPS).
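A minimal sketch of that migration, assuming the old pool is named
tank and the new one tank2:

  # Take a recursive snapshot of everything on the old pool
  zfs snapshot -r tank@migrate

  # Stream all datasets, properties and snapshots to the new pool;
  # -R sends a full replication stream, -F rolls the target back
  # to match
  zfs send -R tank@migrate | zfs receive -F tank2

  # Once the data on tank2 has been verified, retire the old pool
  zpool destroy tank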
2) Build a pool of 2x (10+2) RAIDZ2 vdevs with the 1TB HDDs. That will
give you 16TB usable today.
To expand such a pool, replace and resilver the 1TB drives in a vdev
one at a time. Once all drives in a vdev are replaced, the vdev
capacity can/will increase (with autoexpand=on set on the pool).
With 3TB drives, replacing all drives gives about 50TB; with 4TB HDDs,
about 70TB.
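The per-drive workflow looks roughly like this (pool and device names
are again placeholders):

  # Let vdevs grow automatically once every member drive is larger
  zpool set autoexpand=on tank

  # Swap one 1TB drive (sdb) for a bigger one (sdy), then wait for
  # the resilver to finish before touching the next drive
  zpool replace tank sdb sdy
  zpool status tank    # shows resilver progress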
Samba plays well with ZFS, e.g. ZFS snapshots can be mapped to Windows
shadow copies ("Previous Versions" in Explorer).
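A minimal smb.conf sketch of that mapping using Samba's shadow_copy2
VFS module. The share path and the shadow:format string are
assumptions; the format has to match whatever naming scheme your
snapshot tool uses (this one matches zfs-auto-snapshot's daily
snapshots):

  [backups]
     path = /tank/backups
     vfs objects = shadow_copy2
     shadow: snapdir = .zfs/snapshot
     shadow: sort = desc
     shadow: format = zfs-auto-snap_daily-%Y-%m-%d-%H%M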
Device removal is limited to mirror vdevs in ZFS on Linux (ZoL) at
this time:
https://github.com/zfsonlinux/zfs/pull/6900#issuecomment-501021145
On 8/7/19 5:49 PM, Paul Nijjar via kwlug-disc wrote:
> I have a bunch of drives (mostly 1TB and 3TB) and one of those 24-bay
> disk array shelves I was writing about before. I want to have a big
> contiguous pool of storage onto which I can store backups. Currently I
> need something like 15-20TB of storage to feel comfortable, but this
> may grow over time. I will be backing up using gigabit ethernet (which
> might make restores interesting in an emergency, but oh well).
>
>
> Here is my dream setup:
> - One big storage pool
> - Able to serve up SMB shares for inferior operating systems to use as
> backup targets
> - Able to add new drives as I need them without worrying about drive
> sizes too much
> - Able to replace small drives in the array with bigger ones and use
> the additional capacity easily
> - Able to use a lot of the space on the drives (so full redundancy is
> not critical)
> - If a drive dies I don't lose everything (but can afford to lose what
> was on that one drive)
> - Well supported, preferably on Ubuntu
> - (Relatively) easy for others to maintain and use
> - Good power usage?
>
> I do not think I can achieve the dream. Here is what I have thought
> of/looked into so far:
>
>
> ZFS with FreeNAS or Ubuntu (using ZFS on Linux), raidz
> ------------------------------------------------------
>
> - Allows a fair amount of storage without full redundancy?
> - Does NOT allow me to add new drives easily. I can only add entire
> sets that match the existing raidz array, turning RAID5 into RAID50?
> - If a drive dies I can rebuild (with the usual RAID5 caveats)
> - ZFS on FreeNAS might be preferable to Ubuntu, but I might still go
> with Ubuntu
> - Good protection against corruption
> - Maybe difficult for others to administer?
>
>
> ZFS with FreeNAS/Ubuntu, raid10 (or mdadm)
> ------------------------------------------
>
> - Easier to add new drives as long as they are in pairs
> (Do the pairs have to be matched to previous pairs?)
> - Full redundancy which wastes space
> - If a drive dies I can rebuild it
> - Maybe it is hard/impossible to remove a pair of drives from the set
>
>
> ZFS with FreeNAS/Ubuntu, raid0 (or mdadm)
> -----------------------------------------
>
> - Maximize storage
> - But if one drive dies I lose everything
> - Maybe I can easily add to the array?
> - I don't think I can take anything out ever
>
>
> Plain LVM on Ubuntu
> -------------------
>
> - Maximize storage
> - Have one contiguous pool
> - What happens if one drive dies? Do I lose everything?
>
>
> Individual drives
> ------------------
>
> - I lose having a big contiguous pool for backups, which sucks.
> - Easy to add and remove drives, though
>
>
>
> In all cases I think I can serve up SMB 2/3 shares for inferior
> operating systems. Maybe I can do iSCSI or something, but I do not know
> whether I should care about this.
>
> Are there other options I am missing or should consider?
>
> I presume (from talking to Lori) that ZFS is the thing I should lean
> towards for data integrity reasons, but I am open to people telling me
> that is overkill and I should use something else instead. It feels as
> if ZFS has the momentum in the storage filesystem wars, anyways (as
> compared to btrfs/xfs, or ReFS on Windows).
>
> A bunch of you are computer/storage geniuses. What do you do? What
> should I do?
>
> - Paul
>