On Wed, Sep 28, 2011 at 12:05:10PM -0700, Eric Seppanen wrote:
> I agree: queue lock is the worst performance killer when hw can do
> >100K IOPS per block device.
>
> Rather than just being chased away from the request queue due to
> performance issues, I could argue there's very little point to having
> a queue for devices that
> (a) have no seek penalty (and always use noop elevator)
> (b) have hardware queues at least as deep as the default request queue
> (c) don't benefit from merging
>
> (c) is maybe debatable, but if a device can saturate its bus bandwidth
> on 4KB IO, the latency is probably not worth it.

In theory, yes.  But at some point we will be able to saturate them,
and then people will want proportional I/O, light amounts of queueing,
etc.  And I really don't want to reinvent that for every little device.

The other problem is that a single driver might drive totally different
types of devices.  Already today we have high-IOPS devices accessible
over iSCSI or FC, there are good PCIe flash devices masquerading as
AHCI, and my current problem is that the queue_lock really hurts me in
virtio-blk when using a PCIe flash device underneath.

So we really need some infrastructure that provides a generic interface
to the driver and allows us to plug in merging, scheduling and queueing
on an as-needed basis.  That is my long-term plan - making the
queue_lock suck a little less, and improving the driver interface, is a
good first step, though.
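
To make the "skip the request queue" option above concrete: a driver
can already opt out of the queue_lock path today by supplying its own
make_request function, the way brd and md do.  A minimal sketch,
assuming the 3.x-era block layer API (the exact make_request_fn
prototype differs between kernel versions); the fastdev_* names and
fastdev_submit_to_hw() are invented for illustration:

	/*
	 * Hypothetical driver for a no-seek-penalty device with deep
	 * hardware queues.  It bypasses the request queue, elevator and
	 * queue_lock entirely by registering its own make_request hook.
	 */
	#include <linux/blkdev.h>
	#include <linux/bio.h>

	static struct request_queue *fastdev_queue;

	/* Stand-in for the real hardware submission path. */
	static void fastdev_submit_to_hw(struct bio *bio)
	{
		/* ... post to a hardware queue, ring a doorbell ... */
		bio_endio(bio, 0);	/* complete immediately in this sketch */
	}

	/* Push each bio straight at the hardware: no elevator, no
	 * merging, no queue_lock round trips. */
	static int fastdev_make_request(struct request_queue *q,
					struct bio *bio)
	{
		fastdev_submit_to_hw(bio);
		return 0;
	}

	static int __init fastdev_init(void)
	{
		fastdev_queue = blk_alloc_queue(GFP_KERNEL);
		if (!fastdev_queue)
			return -ENOMEM;

		/* Route all I/O through fastdev_make_request() instead
		 * of the request_fn / elevator path. */
		blk_queue_make_request(fastdev_queue, fastdev_make_request);
		return 0;
	}

The obvious downside is that such a driver also loses merging, the I/O
schedulers and the proportional-I/O machinery, which is exactly the
stuff I'd rather see become pluggable than reinvented per driver.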