On Apr 12, 2011, at 12:31 AM, John R Pierce wrote:

> On 04/12/11 12:23 AM, Matthew Feinberg wrote:
>> Hello All
>>
>> I have a brand spanking new 40TB Hardware Raid6 array
>
> never mind file systems... is that one raid set? do you have any idea
> how LONG rebuilding that is going to take when there are any drive
> hiccups? or how painfully slow writes will be until it's rebuilt? is
> that something like 22 x 2TB or 16 x 3TB? I'll bet a raid rebuild
> takes nearly a WEEK, maybe even longer..
>
> I am very strongly NOT in favor of raid6, even for nearline bulk backup
> storage. I would sacrifice the space and format that as raid10, and
> have at LEAST a couple of hot spares too.

+1 for the 1+0 and a few hot spares.

RAID 6 + spare ran great, but rebuilds took 2 days. The likelihood of 2+ drives failing is lower than that of a single drive failing, but I actually did have 2 failed drives, so RAID 6 + spare saved me. That's why I have since switched to RAID 1+0 + spares.

A tuned XFS filesystem will work great. I run my large RAID XFS filesystems with logbufs=8 and the noatime/nodiratime mount options. I also run iozone to test my tuning options for optimum performance in my environment.

- aurf
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos
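The XFS tuning described above can be sketched as a mount configuration; note the device path and mount point here are hypothetical examples, not taken from the original post:

```shell
# Hypothetical /etc/fstab entry for a large XFS volume on hardware RAID.
# noatime/nodiratime skip access-time updates on files and directories,
# cutting metadata write traffic; logbufs=8 raises the number of
# in-memory log buffers, which can help metadata-heavy workloads.
/dev/sdb1  /srv/backup  xfs  noatime,nodiratime,logbufs=8  0 0
```

The tuned filesystem can then be benchmarked with something like `iozone -a -f /srv/backup/iozone.tmp` (automatic mode over a range of file and record sizes, with the test file placed on the filesystem under test); re-running with and without each option shows whether it actually helps a given workload.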