Re: If you're using large SATA drives in raid 5/6 ....


 



Greg Freemyer <greg.freemyer@xxxxxxxxx> writes:

> All,
>
> I think the below is accurate, but please correct me if I'm wrong or
> misunderstanding.
>
> ===
> If you're using normal big drives (1TB, etc.) in a raid-5 array, the
> general consensus of this list is that it is a bad idea.  The reason
> is that the per-sector error rate has not changed with increasing
> density.
>
> So in the days of 1GB drives, the likelihood of an undetected /
> unrepaired bad sector was actually pretty low for the drive as a
> whole.  But for today's 1TB drives, the odds are 1000x worse, i.e.
> 1000x more sectors with the same basic failure rate per sector.

I just had such a case yesterday. It happens all too often, especially
as drives get older. Rebuilding a raid5 becomes more and more dangerous
as you increase drive capacity.
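As a rough sanity check on that claim, here is a back-of-envelope sketch.
The 1-in-1e14-bits URE rate is the commonly quoted consumer SATA spec and
the 4-drive geometry is my assumption, neither comes from the thread, and
the independent per-bit model is a simplification (real errors cluster):

```python
# Probability of hitting at least one unrecoverable read error (URE)
# while reading every surviving drive in full during a RAID-5 rebuild.
# Assumes the commonly quoted URE rate of 1 per 1e14 bits read and
# statistically independent bit errors -- a crude model, but it shows
# the scaling.

def rebuild_failure_prob(drive_bytes, surviving_drives, ure_per_bit=1e-14):
    bits_read = drive_bytes * surviving_drives * 8
    # P(at least one error) = 1 - P(no error per bit)^bits_read
    return 1.0 - (1.0 - ure_per_bit) ** bits_read

# Hypothetical 4-drive RAID-5: after one failure, 3 survivors must be
# read end to end.
p_1tb = rebuild_failure_prob(1e12, 3)   # 1 TB drives: roughly a 1-in-5 risk
p_1gb = rebuild_failure_prob(1e9, 3)    # 1 GB drives of yesteryear: tiny

print(f"1 TB drives: {p_1tb:.1%} chance of a URE during rebuild")
print(f"1 GB drives: {p_1gb:.4%}")
```

For small per-bit rates the risk grows almost linearly with capacity, which
is where the "1000x worse" figure in the quoted text comes from.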

> So a raid-5 composed of 1TB drives is 1000x more likely to be unable
> to rebuild itself after a drive failure than a raid-5 built from 1 GB
> drives of yesteryear.  Thus the current recommendation is to use raid
> 6 with high density drives.

If you can spare the drive and the CPU, then raid6 is definitely preferable.

Although I think the only future is in combining the raid layer and the
filesystem into one. If you have some corrupted blocks, the FS can then
tell you which files are affected. Raid on its own really only handles
the case of a whole drive failing well.

MfG
        Goswin
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
