[kwlug-disc] MDADM and RAID

Rashkae rashkae at tigershaunt.com
Wed Mar 3 13:46:19 EST 2010


John Van Ostrand wrote:
> ----- "Rashkae" <rashkae at tigershaunt.com> wrote:
>> John Van Ostrand wrote:
>>
>>> Those same disks as RAID0+1:
>>>
>>>         md0 : active raid0 sde1[3] sdd1[2] sdc1[1] sdb1[0]
>>>               286727680 blocks 64k chunks
>>>
>>> Performed the write at 513MB/s, total time 10 seconds
>>>
>> I'm a little out of my element here, so forgive possible ignorance.
>>
>> Where do you get the +1 from? All I see here is a single RAID 0 array,
>> which should, by definition, multiply max throughput by the number of
>> drives (as you demonstrate) but provides no redundancy.
> 
> RAID 1 is, of course, mirroring. But it is only mirroring from one drive to another.
> 
> To achieve mirroring with more than two drives you need an alternate type of RAID. Thus RAID 0+1, or RAID 1+0 (or 10), which, according to Wikipedia, are not synonymous. 
> 
> RAID 0+1 is a stripe set that is mirrored to another stripe set. RAID 1+0 is a mirrored set of drives that is then striped with other mirror sets.
> 
> Apparently, Linux does 1+0, so my email should have said 1+0.
> 
> So what's the difference, I thought, and read carefully. If you're wondering, these are nested RAID sets, so at the code level they're treated as two different RAID sets, one using the other set(s) as "raw" drives. So if the base RAID set is 0 (as in 0+1) and a drive fails, the entire RAID 0 set fails, degrading the mirror. In other words, a RAID 0+1 can sustain multiple drive failures and continue to operate *ONLY IF* all the failed drives are in the same RAID 0 set. As soon as one drive fails in the other set, the RAID 0+1 fails.
> 
> In a 1+0 set, if a drive fails, the base mirror set is degraded and the upper RAID 0 still functions. A second drive can fail, as long as it's not in the same mirror set. That means multiple drive failures only take down the RAID 1+0 if both drives of a mirror fail.
> 
> So on a 1+0 it's far less likely that multiple drive failures will cause the set to fail.
> 

That much I understand; the part I don't understand is:

md0 : active raid0 sde1[3] sdd1[2] sdc1[1] sdb1[0]

I would have expected something like:

md0 : active raid1 sde1 sdd1
md1 : active raid1 sdc1 sdb1
md2 : active raid0 md1 md0
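
For reference, I'd imagine building that by hand would go something like
this (an untested sketch; same partitions, md numbering assumed):

    # two RAID 1 mirror pairs
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sde1 /dev/sdd1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdb1
    # a RAID 0 stripe across the two mirrors
    mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md1 /dev/md0

after which /proc/mdstat should list all three arrays, roughly as above.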

I've never actually created a 0+1 or 1+0 array, but it seems odd to me 
that it would appear as a single raid0 array in /proc/mdstat.
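
(For what it's worth, md also has a native "raid10" personality, created
with --level=10, and that one really does show up as a single array in
/proc/mdstat, but it's labelled raid10, not raid0.)

John's failure-mode point should be easy to demonstrate on a nested build
like the sketch above, I'd think (again untested, same assumed devices):

    # fail one drive in each mirror: both mirrors degrade,
    # but the raid0 on top keeps running
    mdadm /dev/md0 --fail /dev/sde1
    mdadm /dev/md1 --fail /dev/sdc1
    # failing the second drive of the same mirror kills md0,
    # and the stripe on top of it with it
    mdadm /dev/md0 --fail /dev/sdd1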



