On Sat, Feb 28, 2015 at 1:26 PM, Valeri Galtsev <galtsev@xxxxxxxxxxxxxxxxx> wrote:
> Indeed. That is why: no LVMs in my server room. Even no software RAID.
> Software RAID relies on the system itself to fulfill its RAID function;
> what if kernel panics before software RAID does its job? Hardware RAID
> (for huge filesystems I can not afford to back up) is what only makes
> sense for me. RAID controller has dedicated processors and dedicated
> simple system which does one simple task: RAID.

The biggest problem is that myriad defaults aren't well suited to
multiple-device configurations. There are a lot of knobs in Linux, on
the drives, and in hardware RAID cards. None of this is that simple.
(One concrete example of this kind of knob mismatch is sketched below.)

Drives and hardware RAID cards are subject to firmware bugs, just as
we have software bugs in the kernel. We know firmware bugs cause
corruption.

Not all hardware RAID cards are the same; some are total junk. Many
others get you vendor lock-in due to proprietary metadata written to
the drives. You can't get your data off if the card dies; you have to
buy a similar model card, sometimes with the same firmware version, in
order to regain access. Some cards support SNIA's DDF format, in which
case there's a chance mdadm can assemble the array should the hardware
card die (also sketched below).

Anyway, the main thing is knowing where the land mines are, regardless
of what technology you pick. If you don't know where they are, you're
inevitably going to run into trouble with anything you choose.
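On the knob mismatch: a hedged sketch, not a recommendation, and
/dev/sda is just a placeholder; whether SCT ERC is supported at all
depends on the drive. Consumer drives often spend far longer on
internal error recovery than the kernel's default 30 second SCSI
command timer allows, which in a multiple-device setup can get a
merely slow drive reset and kicked out of an array.

    # Report the drive's SCT error recovery timeout, if supported
    smartctl -l scterc /dev/sda

    # Ask for a 7 second (70 decisecond) recovery limit on reads/writes
    smartctl -l scterc,70,70 /dev/sda

    # The kernel's command timer for the same device, in seconds
    cat /sys/block/sda/device/timeout

    # If the drive can't do SCT ERC, raise the kernel timer instead
    echo 180 > /sys/block/sda/device/timeout

The point is only that the two timeouts have to be considered
together; the right values depend on the drives and the workload.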
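And on the DDF point, a sketch of what recovery might look like with
mdadm after a card failure (device names here are hypothetical, and
success depends on the card actually writing standard DDF and on
mdadm's DDF support handling that card's variant):

    # Look for RAID metadata, DDF included, on the member disks
    mdadm --examine /dev/sdb /dev/sdc

    # Attempt to assemble whatever arrays the metadata describes
    mdadm --assemble --scan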
--
Chris Murphy