[kwlug-disc] Bundling HD's. LVM vs mdadm vs ???

unsolicited unsolicited at swiz.ca
Fri Apr 12 12:02:29 EDT 2013



On 13-04-12 10:55 AM, Khalid Baheyeldin wrote:
> On Fri, Apr 12, 2013 at 2:46 AM, unsolicited <unsolicited at swiz.ca
.
.
.
>
>         Is 3TB enough? Because there are reasonably priced 3TB disks out
>         there.  I got two WD Green ones for backups.
>
>     It has been reported on this list that WD Greens have been
>     problematic (sleep issues).
>
> Had them for a few months, and have not noticed anything.

Thanks, good to know.

> I have them connected to the server using eSATA, and use dump to back
> up stuff to them, one disk then the other, daily, plus another full
> level 0 dump weekly, in addition to a weekly backup of stuff excluded
> from dump (mainly media)
>
> Yes, they are not beaten constantly, just 30 minutes a day or so. So
> different workload.
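
(If I follow, that's roughly the pattern below - devices and paths 
invented, just a sketch of the idea, not your actual commands:

  # weekly full backup: level 0 dump; -u records it in /etc/dumpdates
  dump -0u -f /mnt/backup1/home.0.dump /home

  # daily incrementals: level 1, everything changed since the level 0,
  # alternating between the two disks day by day
  dump -1u -f /mnt/backup2/home.1.dump /home
)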

>
>     Since space requirements only ever go up, and cases only hold so many
>     drives (without additional expense), it seems prudent to be going with
>     the largest capacity drives available at the time. (I bought 4 2TB
>     drives not too many years ago, and now ... I'm looking for bigger ones
>     already!)
>
>
> But without N+1 for RAID 5 (software or hardware) or RAID 1 (full
> mirroring), I can't see how you can do a 2TB + 2TB = 4TB without risking
> losing data if one disk fails.

Because the disks are replicated off-computer, to other computers.
(Remember, high availability on any one replica isn't an issue. Believe 
me, if I only had one replica, I'd be running RAID 5. Since I usually buy 
computers two at a time - for hardware interoperability / 
interchangeability, and easier diagnosis - it didn't make sense to RAID 5 
one and not the other. With two computers, the cost of RAID, and high 
availability not a factor, it made more sense to replicate. I've yet to 
regret it, across multiple HD failures over the years. And with two 
computers, even if one were a slow NAS tucked under the stairs, I have 
higher / faster availability than RAID would give me: no performance 
degradation at a single disk failure.) [Granted, no archive history, but 
since the replicas burp nightly, I've still got a day or two to retrieve 
an accidentally deleted file.]

So, the 4TB goes in to replace and expand a 2TB.

That 2TB gets added to the 2TB on another computer, giving it, now, 4TB 
(2+2).

Two 'sets' of 4TBs replicate each night.

One 2TB dies, all the data is still on the other 4TB. Replace the lost 
2TB, re-replicate, good to go.

One computer dies, MB gets fried, SUT (Stupid User Trick, e.g. rm 
/mnt/sda2/*), it gets stolen, fire, water damage, surge, a brick falls 
on it ... the other computer is still around. Redundant systems, not 
just redundant drives.

Throughout, everything stays accessible (in some fashion), buying time 
to repair while staying operable in the meantime. Hardware failure 
ceases to be a 'stop the world while I fix this' event.
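
(The nightly 'burp' itself is nothing fancy. A minimal sketch, assuming 
rsync over ssh from cron - hostname and paths invented:

  # /etc/cron.d/replicate - push to the other box at 02:30 nightly
  30 2 * * *  root  rsync -aH --delete /srv/data/ otherbox:/srv/data/

--delete means an accidental rm propagates at the next run, which is 
where the day-or-two retrieval window above comes from.)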


> I have the 3TB drives with a 3TB partition and filesystem on it.
>
> Basically, you use parted (not fdisk), and create a label as gpt.
>
> See here
>
> http://www.cyberciti.biz/tips/fdisk-unable-to-create-partition-greater-2tb.html

10-4.
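
For my own notes, the non-interactive equivalent (disk name invented - 
double-check the device, this destroys whatever is on it):

  parted -s /dev/sdX mklabel gpt
  parted -s /dev/sdX mkpart primary 1MiB 100%
  mkfs.ext4 /dev/sdX1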


Poking about, it feels like zfs isn't ready for prime time on Kubuntu. 
LVM, part of the kernel these days, appears to have some GUI equivalents 
to gparted: system-config-lvm and kvpm. Apparently lvm can do the mdadm 
parts (mirroring / striping) itself, so I'll probably go with lvm when I 
get that far.
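
Something like the following is what I have in mind - whole disks as 
physical volumes, names invented, untested:

  pvcreate /dev/sdb /dev/sdc                # whole disks, no partition table
  vgcreate vg_data /dev/sdb /dev/sdc
  lvcreate -l 100%FREE -n lv_data vg_data   # linear: 2TB + 2TB = ~4TB
  mkfs.ext4 /dev/vg_data/lv_data

(lvcreate -m 1 would give a mirrored volume instead - that's the mdadm 
overlap.)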

- Also, apparently, taking a whole drive as an lvm physical volume 
avoids partition tables, and therefore the whole 2TB issue, entirely. 
msdos labels, gpt labels, or any other labels, no longer involved.

- Also, apparently, zfs-fuse has been superseded by a native kernel 
implementation (ZFS on Linux) with direct I/O from the kernel, so speed 
has vastly improved. But it's still only version 0.6.1 on Linux.


Not suggesting zfs doesn't make more sense 'commercially', or in a 
standalone NAS (running Solaris?), just not sure I'm ready to trust it 
at home, yet. Some day I may try FreeNAS and zfs or something, just not 
there yet.

OTOH, a commercial environment is probably running blades on a fibre 
channel array, and this whole conversation becomes moot. Although even 
there, how you redundantly back up such amounts of data, let alone 
off-site ... probably gets rather expensive.



