On Fri, Oct 14, 2011 at 12:00:27PM -0700, Roland Dreier wrote:
> Right, but that explicit sleep really has to go in the long term. The
> problem is that if we have a backend that has high performance,
> we can go from queue full to queue empty in way less time than we
> sleep for, which means that we stall processing and get way worse
> performance than the backend can do.
>
> I don't know how to fit this into the target stack architecture, but
> in vague terms we need to stop processing when the queue is
> full, and then restart processing when there's queue space available
> (maybe not when the first slot opens up, but say when we have
> half the queue available again).

The target core already does fairly detailed tracking of the queue
depth, and instead of the sleep we should simply stop processing tasks
until a slot opens up.  By splitting the execution context for the
tasks and the various per-command thread offloads we get a bit closer
to that, but I'm not overly happy with the architecture I have at this
point.

The problem this code handles is different: we get a QUEUE FULL that
isn't directly related to the queue depth, i.e. one caused by a
limitation other than the number of items in the queue, e.g. a per-PCI-
device limit or the number of s/g list entry slots.  And it's fairly
hard to find a good timeout for this.

I suspect we should try to copy the approach used in the SCSI initiator
midlayer, that is, dynamically decrease (and later increase again) the
queue depth based on those events.  scsi_handle_queue_full and
scsi_track_queue_full are the core routines dealing with it on that
side.
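To make the idea concrete, here is a minimal user-space sketch of that
kind of dynamic queue-depth tracking: drop the depth when the backend
reports QUEUE FULL, then cautiously ramp it back up after a run of
clean completions.  It is only an illustration of the approach the
initiator midlayer takes, not the actual scsi_track_queue_full or
target-core code; every name, field, and threshold below is made up
for the example.

	#include <stdio.h>

	struct queue_state {
		int queue_depth;	/* currently allowed outstanding commands */
		int last_full_depth;	/* depth in effect when we last saw QUEUE FULL */
		unsigned long completed;/* clean completions since last adjustment */
	};

	/* Called when the backend returns QUEUE FULL for a command. */
	static void handle_queue_full(struct queue_state *q, int outstanding)
	{
		/*
		 * Only ramp down once per episode: if we already reduced the
		 * depth below the level that produced this QUEUE FULL, ignore it.
		 */
		if (outstanding >= q->queue_depth && q->queue_depth > 1) {
			q->last_full_depth = q->queue_depth;
			q->queue_depth = outstanding > 1 ? outstanding - 1 : 1;
			q->completed = 0;
			printf("QUEUE FULL: dropping depth to %d\n", q->queue_depth);
		}
	}

	/* Called on every successful command completion. */
	static void handle_completion(struct queue_state *q)
	{
		q->completed++;
		/*
		 * After a decent run of clean completions, ramp the depth back
		 * up towards where it was before the QUEUE FULL.  The interval
		 * of 64 completions is an arbitrary choice for the sketch.
		 */
		if (q->last_full_depth &&
		    q->queue_depth < q->last_full_depth &&
		    q->completed >= 64) {
			q->queue_depth++;
			q->completed = 0;
			printf("ramping depth back up to %d\n", q->queue_depth);
		}
	}

	int main(void)
	{
		struct queue_state q = { .queue_depth = 32 };
		int i;

		handle_queue_full(&q, 32);	/* backend hit QUEUE FULL at 32 */
		for (i = 0; i < 200; i++)	/* clean completions ramp it back */
			handle_completion(&q);

		printf("final depth: %d\n", q.queue_depth);
		return 0;
	}

The point of the ramp-down/ramp-up shape is that it adapts to whatever
hidden resource actually caused the QUEUE FULL (per-device limits, s/g
slots, etc.) without needing to guess a timeout.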