Drew Weaver wrote:
Software RAID has failed us so many times in the past that I would never recommend it to anyone. Things like: the RAID breaking for no reason and the server continually rebuilding over and over, and once a drive does finally die, the other drive wasn't being mirrored properly (or wouldn't boot even though we manually synced the bootloaders as suggested).
Dunno, I have a box with two of those IBM IDE "Deathstar" drives in RAID 1 mode and I still DO NOT have RAID problems, even though they have started down the road of self-destruction. Oh, BTW, it is a RH 7.1 install.
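If you want to keep an eye on a mirror like that, a quick health check looks something like the following. Just a sketch: on a RH 7.1-vintage install the old raidtools did this job, mdadm is what you would use on anything newer, and the array name /dev/md0 is an assumption.

    # Kernel's view of all arrays; [UU] means both halves of the mirror are up
    cat /proc/mdstat

    # Per-array detail (mdadm; standard on CentOS 4 and later)
    mdadm --detail /dev/md0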
It has been nothing but a hassle, so if you need reliable data you need to find a card that works for you. I'm not sure why people are so ready to suggest software RAID when the fact is it's pretty unreliable.
Or maybe research the entire toolchain used by Linux software RAID. Most IDE controllers will not tolerate a faulty device on the cable, so each of my disks is the only device on its channel. No problemo.
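On the bootloader complaint above: with GRUB you have to install the boot sector on both mirror members yourself, because the MBR lives outside the array. Roughly like this (a sketch only; the device names hda and hdc are assumptions for a two-channel IDE box, and some setups need extra grub-install options):

    # Put GRUB on both disks so the box can boot from either half of the mirror
    grub-install /dev/hda
    grub-install /dev/hdc

Do that once after building the mirror and the "wouldn't boot after a drive died" scenario mostly goes away.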
In contrast, on another box where I run CentOS 4, I had to mirror a dying drive with a new disk. Later on, the controller started acting up as the old drive entered its death throes, so I got problems with the new mirror and with booting: the controller would not recognize the drives.
Taking the faulty drive off the channel resolved things. The problem here was not with Linux software RAID but with the controller, and that is a hardware problem, not even a kernel driver problem.
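For reference, swapping the dead member out of the mirror goes roughly like this with mdadm (a sketch; the array and partition names /dev/md0 and /dev/hdc1 are assumptions):

    # Mark the dying disk's partition as failed, then pull it from the array
    mdadm /dev/md0 --fail /dev/hdc1
    mdadm /dev/md0 --remove /dev/hdc1

    # After physically swapping the disk and partitioning it, add it back;
    # the kernel resyncs the mirror automatically
    mdadm /dev/md0 --add /dev/hdc1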
Oh, proprietary software RAID drivers from HP, Promise, and whoever are something else entirely. If you use those, take it up with the manufacturer's drivers and don't blame Linux's software RAID driver.