On 31/01/2011 23:52, Keld Jørn Simonsen wrote:
> raid1+0 and Linux MD raid10 are similar, but significantly different
> in a number of ways. Linux MD raid10 can run on only 2 drives.
> Linux raid10,f2 has almost RAID0 striping performance in sequential read.
> You can have an odd number of drives in raid10.
> And you can have as many copies as you like in raid10.

You can make raid10,f2 functionality from raid1+0 by using partitions.
For example, to get a raid10,f2 equivalent on two drives, partition them
into equal halves. Then make md0 a raid1 mirror of sda1 and sdb2, and
md1 a raid1 mirror of sdb1 and sda2. Finally, make md2 a raid0 stripe
set of md0 and md1.
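In mdadm terms that construction might look like the following. This is only a sketch: it assumes sda1/sda2 and sdb1/sdb2 are the equal-sized halves, and the device names are placeholders for whatever drives you actually use.

```shell
# Build the raid10,f2 equivalent by hand from two partitioned drives.
# Each raid1 pairs the first half of one drive with the second half of
# the other; the raid0 on top stripes across the two mirrors.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb2
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sda2
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1
```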

If you have three disks, you can do that too:
md0 = raid1(sda1, sdb2)
md1 = raid1(sdb1, sdc2)
md2 = raid1(sdc1, sda2)
md3 = raid0(md0, md1, md2)

As far as I can figure out, the performance should be pretty much the
same (although wrapping everything in a single raid10,f2 is more
convenient).
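For comparison, the convenient single-array version is one command. Again a sketch, with placeholder device names:

```shell
# Direct creation of a two-disk raid10 with the far-2 layout.
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 \
      /dev/sda /dev/sdb
```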

For four disks, there are more ways to do it:
Option A:
md0 = raid1(sda1, sdb2)
md1 = raid1(sdb1, sdc2)
md2 = raid1(sdc1, sdd2)
md3 = raid1(sdd1, sda2)
md4 = raid0(md0, md1, md2, md3)
Option B:
md0 = raid1(sda1, sdb2)
md1 = raid1(sdb1, sda2)
md2 = raid1(sdc1, sdd2)
md3 = raid1(sdd1, sdc2)
md4 = raid0(md0, md1, md2, md3)
Option C:
md0 = raid1(sda1, sdc2)
md1 = raid1(sdb1, sdd2)
md2 = raid1(sdc1, sda2)
md3 = raid1(sdd1, sdb2)
md4 = raid0(md0, md1, md2, md3)

"Ordinary" raid1+0 is roughly like this:
md0 = raid1(sda1, sdb1)
md1 = raid1(sda2, sdb2)
md2 = raid1(sdc1, sdd1)
md3 = raid1(sdc2, sdd2)
md4 = raid0(md0, md1, md2, md3)

I don't know which of A, B or C is used for raid10,f2 on four disks -
maybe Neil knows?

The fun thing here is to try to figure out the performance for these
combinations. For large reads, A, B and C will give you much better
performance than raid1+0, since you can stream data off all disks in
parallel. For most other accesses, I think the performance will be
fairly similar, except for medium write sizes (covering between a
quarter and half a stripe), which will be faster with C since all four
disks can write in parallel.

All four arrangements support any single disk failing, and B, C and
raid1+0 have a 66% chance of supporting a second failure.
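That double-failure figure is easy to check by brute force. Here is a throwaway sketch (not from the original mail) that enumerates every ordered (first, second) failure pair for the four layouts above; the letters a-d stand for sda-sdd, and each mirror is written as the two drives it spans:

```shell
#!/bin/sh
# For each layout, count how many ordered second failures the array
# survives.  A mirror like "ab" holds one slice of drive a and one of b;
# the array dies only if some mirror loses both of its drives.
survives() {
  fail=$1; shift
  for m in "$@"; do
    d1=${m%?}; d2=${m#?}
    case $fail in *"$d1"*)
      case $fail in *"$d2"*) return 1;; esac
    ;; esac
  done
  return 0
}

# N = the nested "ordinary" raid1+0 arrangement.
for layout in "A ab bc cd da" "B ab ba cd dc" "C ac bd ca db" "N ab ab cd cd"; do
  set -- $layout; name=$1; shift
  ok=0; total=0
  for x in a b c d; do
    for y in a b c d; do
      [ "$x" = "$y" ] && continue
      total=$((total + 1))
      survives "$x$y" "$@" && ok=$((ok + 1))
    done
  done
  case $name in
    A) sA=$ok ;; B) sB=$ok ;; C) sC=$ok ;; N) sN=$ok ;;
  esac
  echo "$name: survives $ok of $total second failures"
done
```

Running it shows B, C and the nested raid1+0 each surviving 8 of 12 ordered double failures (the 66% above), while A survives only 4 of 12, since every drive in the ring shares a mirror with two neighbours rather than one.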

I don't think there is any way you can get the equivalent of raid10,o2
in this way. But then, I am not sure how much use raid10,o2 actually is
- are there any usage patterns for which it is faster than raid10,n2 or
raid10,f2?
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html