While fiddling about today I found that raising /sys/block/sda/queue/nr_requests from the default 128 to something above the queue depth of the 3ware controller also fixes the problem (256 doesn't work, 384 and up do).
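In case anyone wants to try the same thing, these are the knobs involved (sda here is just whatever device the 3ware unit shows up as on your box):

cat /sys/block/sda/queue/nr_requests          # 128 by default
echo 384 > /sys/block/sda/queue/nr_requests   # anything above the controller's queue depth

The I/O scheduler itself can be selected at boot with elevator=as or elevator=deadline on the kernel command line.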
OK, I have run three sets of bonnie++ tests on my system. This was using the 2.6-bk kernel as of about midnight GMT on 2004-02-17.
The system is a single P4 HT 2.8GHz (SMP kernel), 1GB RAM (4GB highmem enabled), Intel 865G chipset, and a 3ware 8506-8 with six Seagate 160GB Barracuda SATA disks in a single RAID-5 array. The bonnie++ tests were run on an XFS filesystem (made with default mkfs.xfs parameters from a recent xfsprogs package) over a 200GB LVM2 volume (dm-linear).
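For reference, the volume was put together roughly like this (device, VG and LV names are illustrative, not the exact ones used here):

pvcreate /dev/sda
vgcreate vg0 /dev/sda
lvcreate -L 200G -n bench vg0
mkfs.xfs /dev/vg0/bench          # default mkfs.xfs parameters
mount /dev/vg0/bench /mnt/bench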
bonnie++ -r 512 -s 40960 -f -b
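(For the record: -r 512 sets the RAM size bonnie++ assumes, in MB; -s 40960 uses a 40GB test file; -f skips the per-character tests; -b disables write buffering, i.e. fsync() after every write.)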
Scheduler       nr_requests      Seq. Write   Rand. Write   Seq. Read   Rand. Seek
Anticipatory    128 (default)         53772         15557       47730        146.5
Anticipatory    384                   53843         17595       44663        143.1
Deadline        384                   54973         18897       41476        227.3 (!)

(Write/read figures are bonnie++ block throughput in KB/s; random seeks are per second.)
Miquel's suggestion has a definite positive effect, except on sequential reads. The system still doesn't produce anywhere near the throughput I'd expect, though, and still doesn't come close to the 2.4 numbers either (those are somewhat bogus anyway, since they rely on extremely large readahead settings). I did wonder whether some hardware problem might be capping me at ~60MB/s, but since 2.4 could manage sequential reads of around 95MB/s on the same hardware, I think the hardware is OK.
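For comparison, the large readahead those 2.4 numbers depended on was set with something along these lines (the value and device are just examples):

blockdev --getra /dev/sda          # current readahead, in 512-byte sectors
blockdev --setra 8192 /dev/sda     # 8192 sectors = 4MB of readahead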