calculating optimal chunk size for Linux software-RAID

Am I correct that the optimal chunk size is usually the average size of the
files read from or written to disk, divided by the number of block devices
in the RAID array that store the data? For example, if the average file
size is 1024KiB and I have four disks in RAID1, should I choose a chunk
size of around 256KiB to get optimal read performance? Or, if I have two
drives in RAID0, should I choose a 512KiB chunk size instead? Or are there
better methods/benchmarks for determining the optimal chunk size for
software-RAID? Last but not least, is there a good utility that can help
measure the average I/O read/write size?
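
To make the arithmetic concrete, the rule of thumb I have in mind is
roughly the following untested Python sketch. The power-of-two rounding
and the 4KiB floor are my own assumptions for illustration, not anything
I know mdadm to require; the function name is just a placeholder:

#!/usr/bin/env python3
# Sketch of the heuristic from the question: chunk size = average I/O size
# divided by the number of data-bearing devices, rounded down to a power
# of two (powers of two seem to be the conventional choice).

def suggested_chunk_kib(avg_io_kib, data_disks):
    """Return a power-of-two chunk size (in KiB) near avg_io_kib / data_disks."""
    raw = max(avg_io_kib // data_disks, 4)   # 4KiB floor is my assumption
    chunk = 1
    while chunk * 2 <= raw:                  # round down to a power of two
        chunk *= 2
    return chunk

if __name__ == "__main__":
    print(suggested_chunk_kib(1024, 4))  # 256, the four-disk example above
    print(suggested_chunk_kib(1024, 2))  # 512, the two-drive RAID0 example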
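
As for measuring the average request size, the closest I have come is
sampling /proc/diskstats before and after a representative workload,
which should give roughly the numbers iostat -x reports. Again only a
rough sketch; "sda" and the 10-second interval are placeholders:

#!/usr/bin/env python3
# Sample /proc/diskstats twice and derive the average read/write request
# size for one block device from the deltas.

import time

def snapshot(dev):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                reads, sectors_read = int(fields[3]), int(fields[5])
                writes, sectors_written = int(fields[7]), int(fields[9])
                return reads, sectors_read, writes, sectors_written
    raise SystemExit(f"device {dev} not found in /proc/diskstats")

def average_io_size(dev="sda", interval=10):
    r1, sr1, w1, sw1 = snapshot(dev)
    time.sleep(interval)                     # run the workload meanwhile
    r2, sr2, w2, sw2 = snapshot(dev)
    reads, writes = r2 - r1, w2 - w1
    # /proc/diskstats counts sectors in fixed 512-byte units
    avg_read = (sr2 - sr1) * 512 / reads if reads else 0
    avg_write = (sw2 - sw1) * 512 / writes if writes else 0
    return avg_read, avg_write

if __name__ == "__main__":
    rd, wr = average_io_size()
    print(f"avg read {rd/1024:.1f} KiB, avg write {wr/1024:.1f} KiB")

Whether feeding that number into the division above is actually the right
starting point is exactly what I would like to know.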


regards,
Martin