Re: make filesystem failed while the capacity of raid5 is bigger than 16TB

On 13/09/12 11:21, GuoZhong Han wrote:
Hi David:

          I am sorry for my last mail, in which I had not described the
requirements of the system very clearly.

          I will describe the requirements of the system for you in detail.

(snip)


          As you said, the write performance of a 16x 2TB RAID5 will be
terrible, so how many disks do you think would be appropriate to build
into a RAID5?

Personally I wouldn't use more than 5 drives in a RAID5 with drives larger than 1TB; the failure risk is too high. With 16x 2TB drives, how about two RAID6 arrays of 8 drives each, then RAID0 them? (RAID60)

Or, two RAID6 arrays with 7 drives each, 2 hot spares, and RAID0 on top. (RAID60 + 2 HSP)
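Either layout could be created with mdadm along these lines. This is only a sketch: the device names (/dev/sdb through /dev/sdq) are placeholders for your 16 drives, and you'd normally let the RAID6 arrays at least begin resyncing before putting filesystems on the stripe:

```shell
# RAID60: two 8-drive RAID6 arrays, striped together with RAID0.
# Device names /dev/sd[b-q] are assumed placeholders for the 16 drives.
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
mdadm --create /dev/md1 --level=6 --raid-devices=8 /dev/sd[j-q]

# RAID0 across the two RAID6 arrays gives the RAID60 top level.
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1

# Variant: 7 data/parity drives per RAID6 plus one hot spare each
# (the "RAID60 + 2 HSP" layout above):
# mdadm --create /dev/md0 --level=6 --raid-devices=7 --spare-devices=1 /dev/sd[b-i]
# mdadm --create /dev/md1 --level=6 --raid-devices=7 --spare-devices=1 /dev/sd[j-q]
```

With 2 parity drives per leg, either variant survives two drive failures in the same RAID6 leg, which a plain 16-drive RAID5 cannot.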

You mention 36 cores. Perhaps you should try the very latest mdadm versions and Linux kernels (perhaps from the MD Linux git tree), and enable the multicore option.

Mathias
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


