On 17.04.2013 00:44, Robert L Mathews wrote:
> [...] the endless reports of complete array failures that appear on
> the list with RAID 5 and even RAID 6 (a recent topic, I note, was
> "multiple disk failures in an md raid6 array"). I almost never see
> anyone reporting complete loss of a RAID 1 array.
Correct.
> The fundamental difference between RAID 1 and other levels seems to be
> that the usefulness of an individual array member doesn't rely on the
> state of any other member. This vastly reduces the impact of failures
> on the overall system. After using mdadm with various RAID levels
> since 2002 (thanks, Neil), I'm convinced that RAID 1 is by its very
> nature far less fragile than any other scheme. This belief is sadly
> reinforced almost every week by a new tale of woe on the mailing list.
Exactly.
I think the RAID5 problems are caused by bad design decisions in the md
implementation, though, not by anything inherent in the concept of
RAID5. Many people have trouble getting at the data on their RAID5
array: they have enough readable disks to reconstruct everything, but
they can't convince md to assemble the array and read it. RAID1 doesn't
have that problem, because you can ignore md and read a member
directly. This is a problem of Linux md's own making.
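
To illustrate that escape hatch (a sketch, not a recipe: the device
names, array name, and mount point below are made up, and it assumes
0.90 or 1.0 metadata, where the superblock lives at the *end* of the
member and the filesystem starts at sector 0):

  # stop md so it releases the members (array name is an example)
  mdadm --stop /dev/md0
  # mount one half of the mirror directly, read-only, bypassing md
  mount -o ro /dev/sdb1 /mnt/rescue

With 1.1/1.2 metadata the superblock and data offset sit at the start
of the member, so you'd go through a read-only loop device at the data
offset instead:

  # the data offset (in 512-byte sectors) is reported by
  # "mdadm --examine /dev/sdb1"; OFFSET_SECTORS is a placeholder
  losetup -r -o $((OFFSET_SECTORS * 512)) /dev/loop0 /dev/sdb1
  mount -o ro /dev/loop0 /mnt/rescue

Either way, no cooperation from md is required, which is exactly what
you can't say about a single RAID5 member.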
FWIW, my own 10 years of experience with Linux md RAID have led me to
the same conclusion as yours.
See thread "md dropping disks too early"
Ben