On Tue, 8 Mar 2016, David Mosberger-Tang wrote:

> From: David Mosberger <davidm@xxxxxxxxxx>
>
> usb_submit_urb() may take quite long to execute. For example, a
> single sg list may have 30 or more entries, possibly leading to that
> many calls to DMA-map pages. This can cause interrupt latency of
> several hundred micro-seconds.
>
> Avoid the problem by releasing the io->lock spinlock and re-enabling
> interrupts before calling usb_submit_urb(). This opens races with
> usb_sg_cancel() and sg_complete(). Handle those races by using
> usb_block_urb() to stop URBs from being submitted after
> usb_sg_cancel() or sg_complete() with error.
>
> Note that usb_unlink_urb() is guaranteed to return -ENODEV if
> !io->urbs[i]->dev and since the -ENODEV case is already handled,
> we don't have to check for !io->urbs[i]->dev explicitly.
>
> Before this change, reading 512MB from an ext3 filesystem on a USB
> memory stick showed a throughput of 12 MB/s with about 500 missed
> deadlines.
>
> With this change, reading the same file gave the same throughput but
> only one or two missed deadlines.
>
> Signed-off-by: David Mosberger <davidm@xxxxxxxxxx>

Pretty good.  Only one change is really needed.

> @@ -515,12 +516,10 @@ void usb_sg_wait(struct usb_sg_request *io)
> 		int retval;
>
> 		io->urbs[i]->dev = io->dev;
> +		spin_unlock_irq(&io->lock);
> +
> 		retval = usb_submit_urb(io->urbs[i], GFP_ATOMIC);

This should now be GFP_NOIO.

> -		/* after we submit, let completions or cancellations fire;
> -		 * we handshake using io->status.
> -		 */
> -		spin_unlock_irq(&io->lock);
> 		switch (retval) {
> 			/* maybe we retrying will recover */
> 		case -ENXIO:	/* hc didn't queue this one */

> @@ -590,8 +589,8 @@ void usb_sg_cancel(struct usb_sg_request *io)
> 	for (i = 0; i < io->entries; i++) {
> 		int retval;
>
> -		if (!io->urbs[i]->dev)
> -			continue;
> +		usb_block_urb(io->urbs[i]);
> +
> 		retval = usb_unlink_urb(io->urbs[i]);
> 		if (retval != -EINPROGRESS
> 				&& retval != -ENODEV

Strictly speaking, this loop should run backward.  Then the
spin_unlock() could be replaced with spin_unlock_irqrestore().  In
fact, the whole routine could be restructured like this:

	spin_lock_irqsave(&io->lock, flags);

	/* shut everything down, if it isn't already */
	if (io->status) {
		spin_unlock_irqrestore(&io->lock, flags);
		return;
	}
	io->status = -ECONNRESET;
	spin_unlock_irqrestore(&io->lock, flags);

	for (i = io->entries - 1; i >= 0; --i) {
		...

But that should be done in a separate patch.  It's not critical
anyway; cancelling I/O is relatively rare and unimportant.

Alan Stern

--
To unsubscribe from this list: send the line "unsubscribe linux-usb" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html