Re: RAID5 created by 8 disks works with xfs

On 01/04/12 07:40, Stan Hoeppner wrote:
> On 4/1/2012 12:12 AM, daobang wang wrote:
>> Thank you very much!
>> I got it, so we can remove the Volume Group and Logical Volume to save resources.
>> And I will try RAID5 with 16 disks to write 96 total streams again.

> Why do you keep insisting on RAID5?!?!  It is not suitable for your
> workload.  It sucks Monday through Saturday and twice on Sunday for this
> workload.


My thoughts on this setup are that RAID5 (or RAID6) is a poor choice - it can quickly make a mess of streaming writes. Even if the streams themselves mostly land as full-stripe writes, the odd writes - file tails, metadata updates, log writes, and so on - turn into read-modify-write operations, and the seeks and extra I/O those cause will cripple write performance for everything else.
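
To put rough numbers on it: on an 8-disk RAID5, a full-stripe write costs 8 disk writes for 7 disks' worth of data, while a small partial-stripe write costs two reads (old data plus old parity) and two writes, with a rotation's worth of latency in between. You can at least tell xfs about the stripe geometry so the large writes stay aligned - a minimal sketch, assuming a hypothetical /dev/md0 built from 8 disks with a 512 KiB chunk (the values are illustrative, and a recent mkfs.xfs will usually pick them up from md automatically):

# su = chunk size, sw = number of data disks (8 disks - 1 parity = 7).
mkfs.xfs -d su=512k,sw=7 /dev/md0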

And if a disk fails so that you are running degraded, it will be hopeless - every read that touches the missing disk has to be reconstructed from all the surviving spindles, so a single failure multiplies the I/O load across the whole array.

Either drop the redundancy requirement entirely (maybe by making sure other backups are in order), or double the spindles and use RAID1 / RAID10.
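
For the same usable capacity as an 8-disk RAID5 that means roughly double the disk count, but each write is then just two mirrored disk writes, with no parity reads and no degraded-mode reconstruction penalty to speak of. A sketch of what the array creation might look like (the device names and the 16-disk count are placeholders, not a sizing recommendation):

# RAID10 across 16 hypothetical member disks /dev/sdb../dev/sdq.
mdadm --create /dev/md0 --level=10 --raid-devices=16 /dev/sd[b-q]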


For an application like this, it would probably make sense to put the xfs log (and the mdraid bitmap file, if you are using one) on a separate disk - perhaps a small SSD.
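
Something along these lines could work, assuming a hypothetical SSD with a small partition /dev/sdr1 for the log and a filesystem mounted at /mnt/ssd for the bitmap file (paths and sizes are illustrative, and mdadm is picky about which filesystems an external bitmap file may live on):

# Keep the write-intent bitmap off the array itself
# (remove any existing bitmap with --bitmap=none first).
mdadm --grow /dev/md0 --bitmap=/mnt/ssd/md0.bitmap

# External xfs log on the SSD; the same logdev must be given at mount time.
mkfs.xfs -l logdev=/dev/sdr1,size=128m /dev/md0
mount -o logdev=/dev/sdr1 /dev/md0 /data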
