Jens Axboe [mailto:axboe@xxxxxxx] writes:

> You say io, but I guess you mean writes in particular?

Reads or writes. One of the test cases was:

    dd if=/dev/sda of=/dev/null bs=512b

(bs=512b is dd notation for 512 512-byte blocks, i.e. 256K per read), which would break apart into 64 4K reads with no completion dependencies between them.

> Or for any substantial amount of io, you would be queueing it so fast
> that it should have plenty of time to be merged until the drive sucks
> them in.

Did I mention that this problem started occurring when we increased the aacraid adapter and driver performance last year? We managed to suck the requests in faster. Sadly (from the perspective of Adaptec pride in our hardware controllers ;-> ), the scsi_merge layer coalesces the requests more efficiently than the adapter's firmware can, solely because of the PCI bus bandwidth the unmerged requests consume.

I must admit that the last time I ran this instrumented test was in the 2.6.3 timeframe with SL9.1. When did this 'plugging' you are talking about make it into the scsi layer? It sounds like I need to retest; certainly a good result of opening my mouth to start this thread.

> And a few ms should be enough time to queue that amount many many
> times over.

The adapter can suck in 256 requests within a single ms.

Sincerely -- Mark Salyzyn
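
P.S. For anyone wanting to reproduce this, a minimal sketch of watching the coalescing from user space while the test runs; it assumes the sysstat iostat and its rrqm/s and avgrq-sz columns, none of which were part of my original instrumented test:

    # run the original test case in the background
    dd if=/dev/sda of=/dev/null bs=512b &

    # rrqm/s counts read requests the block layer merged before issue;
    # avgrq-sz is the mean issued request size in 512-byte sectors
    iostat -x 1 /dev/sda

If the scsi/block layer is doing the coalescing, avgrq-sz should sit near 512 sectors (256K); if the adapter is sucking the requests in before they can merge, it will hover around 8.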
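
P.P.S. On a current kernel the queue's merge behaviour can also be inspected and pinned through sysfs. These knobs postdate the 2.6.3 timeframe, so treat this as a sketch against a modern block layer rather than anything I tested:

    # current elevator and queue depth for the device
    cat /sys/block/sda/queue/scheduler
    cat /sys/block/sda/queue/nr_requests

    # nomerges: 0 = merge normally, 1 = only simple merges,
    # 2 = never merge (approximates the adapter sucking requests in raw)
    echo 2 > /sys/block/sda/queue/nomerges

Setting nomerges to 2 and rerunning the dd test should make the avgrq-sz difference, and the per-request PCI overhead, directly visible.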