On Thu, 11 Oct 2007 11:01:39 +1000, David Chinner wrote:

> Latencies are an order of magnitude lower at 60-70ms because the disks
> have less deep queues. This is expected - deep queues and multiple
> outstanding I/Os are the enemy of single I/O latency....
>
> If I remount with barriers enabled, the latency at nr_requests=128
> goes up to a consistent 2.2s. Not surprising, we're flushing the drive
> cache very regularly now and it points to the create or truncate
> transaction having to push log buffers to disk. The latency remains
> at 70-80ms at nr_requests=4.

Thanks for the info. I did try fiddling with nr_requests, but I made it
bigger. I'll try with it lower.

> > It seems this problem was introduced between 2.6.18 and 2.6.19.
>
> When the new SATA driver infrastructure was introduced. Do you have
> NCQ enabled on more recent kernels and not on 2.6.18? If so, try
> disabling it and see if the problem goes away....

Unfortunately the drives in the file server don't support NCQ. I'm not
sure whether it's supported in the machine I was testing on (it's
certainly a few years old).

> > The other thing I've found is that if I do the dd to an ext3 fs (on
> > the same disk at least) while running the test in the XFS fs then I
> > also see the latencies.
>
> So it's almost certainly pointing at an elevator or driver change, not
> an XFS change.

OK, though it doesn't seem to affect ext3. I'm going to run a git bisect
to see what it comes up with.

> Cheers,
>
> dave.

Cheers,
Andrew
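
P.S. In case anyone wants to reproduce something similar, here's a minimal
sketch of a create+fsync latency probe -- the mount point, file count and
write size below are placeholders, not the actual test I'm running:

#!/usr/bin/env python3
# Rough create+fsync latency probe (sketch only; adjust TESTDIR etc.).
import os
import time

TESTDIR = "/mnt/xfs/latency-test"   # placeholder mount point
NFILES = 200                        # placeholder file count
CHUNK = b"x" * 4096                 # placeholder write size

os.makedirs(TESTDIR, exist_ok=True)

worst = 0.0
for i in range(NFILES):
    path = os.path.join(TESTDIR, "f%04d" % i)
    t0 = time.monotonic()
    fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
    os.write(fd, CHUNK)
    os.fsync(fd)     # forces the data and the create metadata to stable storage
    os.close(fd)
    worst = max(worst, time.monotonic() - t0)

print("worst create+fsync latency: %.1f ms" % (worst * 1000))

While that runs, nr_requests can be lowered on the fly by writing to
/sys/block/<dev>/queue/nr_requests, and barriers toggled with the
barrier/nobarrier XFS mount options, to compare the worst-case numbers.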