Re: increasing stripe_cache_size decreases RAID-6 read throughput

I did some tests, starting from the default values of 256 for
stripe_cache_size and 3072 for read_ahead_kb, and doubling both until
performance stopped improving (a sketch of the sweep loop is included
below, after the results). Here are the best results I saw; iozone
reports throughput in KB/s:

# echo 2048 > /sys/block/md0/md/stripe_cache_size
# echo 24576 > /sys/block/md0/queue/read_ahead_kb
# iozone -a -y64K -q16M -s4G -e -f iotest -i0 -i1 -i2

                                                    random  random
      KB  reclen   write rewrite    read    reread    read   write
 4194304      64  241087  259892   243478   248102    7745   16161
 4194304     128  259503  261886   244612   247157   13417   26812
 4194304     256  260438  268077   240211   238916   21884   37527
 4194304     512  243511  250004   252507   252276   34694   48868
 4194304    1024  244744  253905   258920   250495   52351   76356
 4194304    2048  240910  250500   253800   265361   79848  100131
 4194304    4096  244283  253516   271940   272117  101737  137386
 4194304    8192  239110  246370   262118   269687  103437  164715
 4194304   16384  240698  249182   239378   253896  119437  198276


Roughly 250 MB/s for both sequential reads and writes is quite nice for a
5-drive RAID-6: that is three data drives, so about 83 MB/s of sequential
throughput per spindle.
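
For anyone who wants to repeat this, the sweep itself was nothing clever.
Below is a minimal sketch of the kind of loop I used; it assumes /dev/md0,
the same iozone invocation as above, and an arbitrary upper bound of 4096
on stripe_cache_size:

#!/bin/sh
# Double stripe_cache_size and read_ahead_kb together, re-running the same
# iozone workload after each change, and compare the results by hand.
scs=256      # stripe_cache_size starting point (stripe_heads)
ra=3072      # read_ahead_kb starting point
while [ "$scs" -le 4096 ]; do
    echo "$scs" > /sys/block/md0/md/stripe_cache_size
    echo "$ra"  > /sys/block/md0/queue/read_ahead_kb
    echo "=== stripe_cache_size=$scs read_ahead_kb=$ra ==="
    iozone -a -y64K -q16M -s4G -e -f iotest -i0 -i1 -i2
    scs=$((scs * 2))
    ra=$((ra * 2))
done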

But I still do not understand why it is necessary to increase the
stripe_cache_size to 16 full stripes in order to optimize sequential
write speed.
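
For reference, the 16-stripes figure comes from the stripe cache geometry:
stripe_cache_size counts stripe_heads, and each stripe_head caches one page
of every member device. The sketch below is only an illustration; it assumes
a 4 KiB page size and /dev/md0, and reads the chunk size (reported in bytes)
from sysfs rather than hard-coding it. With 512 KiB chunks, which is what
makes 2048 stripe_heads come out to 16 full stripes, the numbers are as shown
in the comments:

#!/bin/sh
# Express stripe_cache_size as "full stripes" of cache: each stripe_head
# caches one 4 KiB page on every member device, so the cache covers
# stripe_cache_size * 4 KiB of each disk.
scs=$(cat /sys/block/md0/md/stripe_cache_size)    # e.g. 2048 stripe_heads
chunk_bytes=$(cat /sys/block/md0/md/chunk_size)   # chunk size in bytes
chunk_kib=$((chunk_bytes / 1024))                 # e.g. 512 KiB
cache_kib_per_disk=$((scs * 4))                   # 2048 * 4 KiB = 8192 KiB
echo "cache per member device: ${cache_kib_per_disk} KiB"
echo "full stripes cached:     $((cache_kib_per_disk / chunk_kib))"   # 8192 / 512 = 16

In memory terms that is stripe_cache_size * 4 KiB * 5 member devices = 40 MiB
of stripe cache for this array.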
