On Wed, 08 Dec 2004 17:41:45 -0500, Guy wrote:
> I also tried changing /proc/sys/vm/max-readahead.
> I tried the default of 31, 0 and 127.  All gave me about the same
> performance.
>
> I started testing the speed with the dd command below.  It completed in
> about 12.9 seconds.  None of the readahead changes seemed to affect my
> speed.  Everything is now set to 0, still 12.9 seconds.
> 12.9 seconds = about 79.38 MB/sec.
>
> time dd if=/dev/md2 of=/dev/null bs=1024k count=1024

I'm running kernel 2.6.8; I found the readahead setting had a pretty
dramatic effect.

I set readahead for all the drives and their partitions to zero:

    blockdev --setra 0 /dev/{hdc,hdg,sda,hdc5,hdg5,sda5}

Then I tested various readahead values for the array device by reading
1GB of data from it each time, using this procedure:

    blockdev --flushbufs /dev/md1
    blockdev --setra $readahead /dev/md1
    dd if=/dev/md1 of=/dev/null bs=1024k count=1024

These are the results:

      RA    transfer rate (bytes/sec)
    ---------------------------------
       0:   15768513
     128:   33680867
     256:   42982770
     512:   59223248
    1024:   78590551
    2048:   81918844
    4096:   82386839

We seem to reach the point of diminishing returns at a readahead of
1024, at roughly 80MB/sec throughput.

To recap, this is with three Seagate Barracuda drives in a RAID5
configuration: two 80GB PATA and one 120GB SATA.  256 was the default
readahead value.

The chunk size on my array is 32k.  I don't know whether that has an
effect or not.

-Steve
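P.S. If anyone wants to repeat the sweep, the procedure above is easy to
script.  Here's a rough sketch, untested as written; it assumes the
array is /dev/md1 and that the component drives' readahead has already
been zeroed as shown above:

    #!/bin/sh
    # Sweep readahead values on the md device and time a 1GB
    # sequential read at each setting.
    DEV=/dev/md1
    for ra in 0 128 256 512 1024 2048 4096; do
        blockdev --flushbufs $DEV   # drop the device's cached blocks
        blockdev --setra $ra $DEV   # readahead is in 512-byte sectors
        echo "readahead: $ra (now set to $(blockdev --getra $DEV))"
        time dd if=$DEV of=/dev/null bs=1024k count=1024
    done

Dividing the 1GB read size by the elapsed time reported by "time" gives
the transfer rates shown in the table above.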