On Fri, Jan 08, 2010 at 11:25:08AM -0500, Thomas Harold wrote:
> On 1/7/2010 12:28 PM, Joseph L. Casale wrote:
> >>> I also heard that disks above 1TB might have reliability issues.
> >>> Maybe it changed since then...
> >>>
> >>
> >> I remember rumors about the early 2TB Seagates.
> >>
> >> Personally, I won't RAID SATA drives over 500GB unless they're
> >> enterprise-level ones with a limit on how long the drive can spend
> >> retrying a read error before reporting it back to the host.
> >>
> >> That should also take care of the reliability issue to a large degree.
> >
> > An often-overlooked issue is the rebuild time with Linux software RAID
> > and every hardware RAID controller I have seen. On large drives, the
> > rebuild takes so long, purely because of the sheer size, that a degraded
> > array leaves you exposed for the whole duration. ZFS's resilver handles
> > this about as well as possible by copying only actual data.
> >
> > With this in mind, it's wise to consider how you build redundancy into
> > the solution...
>
> Yeah, RAID-5 is a bad idea these days with large drive sizes. RAID-6 or
> RAID-10 is a far better choice.
>
> I prefer RAID-10 because the rebuild time is based on the size of a
> drive pair, not the entire array.

I have mixed feelings about RAID-10... I like the extra speed it gives
(especially the additional IOPS with ZFS), but at the same time, if you
lose one drive, you're then _one_ drive failure away from losing your
entire array. Of course, you'd have to be unlucky enough for that second
failure to hit the other member of the mirror set that already suffered
a failure...

Maybe using three drives per RAID-1 set would make me feel better (but
that wastes a lot of space) :)

This is a good read: http://queue.acm.org/detail.cfm?id=1670144

Ray
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos
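[Editor's note: the exposure argument in the thread (any second failure kills RAID-5, only the mirror partner kills RAID-10) can be sketched with a quick back-of-the-envelope check. This is my own illustration, not from the thread; the function name and the 8-drive example are assumptions for the sketch, and it ignores rebuild-window length and unrecoverable read errors.]

```python
# Given one drive already failed, what fraction of possible *next* drive
# failures (before the rebuild finishes) loses the whole array?

def fatal_fraction(level: str, n_drives: int) -> float:
    """Fraction of the remaining drives whose failure, while the array is
    still degraded, would destroy the array. Illustrative only."""
    remaining = n_drives - 1
    if level == "raid5":
        # RAID-5 tolerates exactly one failure: any second failure is fatal.
        return 1.0
    if level == "raid6":
        # RAID-6 tolerates two failures: a single additional failure is safe.
        return 0.0
    if level == "raid10":
        # With 2-way mirrors, only the failed drive's partner is fatal.
        return 1 / remaining
    raise ValueError(f"unknown level: {level}")

# For an 8-drive array, RAID-5 is certain to die on a second failure,
# while RAID-10 dies only if the one specific partner drive fails (1 in 7).
print(fatal_fraction("raid5", 8))   # 1.0
print(fatal_fraction("raid10", 8))  # ~0.143
```

The numbers make the trade-off in the thread concrete: RAID-10's exposure window is shorter (pair-sized rebuild) and narrower (only one drive matters), but it is still nonzero, which is why Ray floats 3-way mirrors.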