Setting NR_STRIPES high increases RAID5 performance

Hi,

When testing a software RAID5 array (8 SATA drives on a Marvell chip) with iometer,
I noticed that read performance dropped much less than write performance when I
added randomness to the access pattern (read: 80 MB/s -> 40 MB/s; write:
70 MB/s -> 9 MB/s). A difference this large I can't explain by the
read-compute-write cycle of the parity stripe alone.

I found that increasing NR_STRIPES from 256 to (8*1024) raised write throughput
from 9 MB/s to 35 MB/s. That is still less than I would expect from 8 SATA disks
working in parallel. Could it be that the raid5d thread is not woken up often
enough, or that it runs at too low a priority? Does anybody know of other
#defines that could help here?
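
For reference, a minimal sketch of the change I made, assuming a 2.6-era
drivers/md/raid5.c where the stripe cache size is still a compile-time
constant (the exact location and surrounding code vary by kernel version):

	/* drivers/md/raid5.c -- size of the stripe_head cache.
	 * Each stripe_head caches PAGE_SIZE bytes per member device,
	 * so the footprint is roughly NR_STRIPES * nr_disks * PAGE_SIZE:
	 *   256      stripes * 8 disks * 4 KB ~=   8 MB
	 *   (8*1024) stripes * 8 disks * 4 KB ~= 256 MB
	 */
	#define NR_STRIPES	(8*1024)	/* default is 256 */

My understanding is that a bigger cache lets raid5 gather more pending writes
into full-stripe writes and so avoid the read-modify-write of the parity. If
your tree already exposes /sys/block/mdX/md/stripe_cache_size, the same value
can be set at runtime instead of recompiling.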

	Bart
