The usbfs driver manages a list of completed asynchronous URBs.  But it
is too eager to free the entries on this list: destroy_async() gets
called whenever an interface is unbound or a device is removed, and it
deallocates the outstanding struct async entries for all URBs on that
interface or device.  This is wrong; the user program should be able to
reap an URB any time after it has completed, regardless of whether or
not the interface is still bound or the device is still present.

This patch (as1222) moves the code for deallocating the completed list
entries from destroy_async() to usbdev_release().  The outstanding
entries won't be freed until the user program has closed the device
file, thereby eliminating any possibility that the remaining URBs might
still be needed.

This fixes a bug in which a program can hang in the USBDEVFS_REAPURB
ioctl when the device is unplugged.

Signed-off-by: Alan Stern <stern@xxxxxxxxxxxxxxxxxxx>
Reported-and-tested-by: Martin Poupe <martin.poupe@xxxxxxxx>

---

Index: usb-2.6/drivers/usb/core/devio.c
===================================================================
--- usb-2.6.orig/drivers/usb/core/devio.c
+++ usb-2.6/drivers/usb/core/devio.c
@@ -359,11 +359,6 @@ static void destroy_async(struct dev_sta
 		spin_lock_irqsave(&ps->lock, flags);
 	}
 	spin_unlock_irqrestore(&ps->lock, flags);
-	as = async_getcompleted(ps);
-	while (as) {
-		free_async(as);
-		as = async_getcompleted(ps);
-	}
 }
 
 static void destroy_async_on_interface(struct dev_state *ps,
@@ -644,6 +639,7 @@ static int usbdev_release(struct inode *
 	struct dev_state *ps = file->private_data;
 	struct usb_device *dev = ps->dev;
 	unsigned int ifnum;
+	struct async *as;
 
 	usb_lock_device(dev);
 
@@ -662,6 +658,12 @@ static int usbdev_release(struct inode *
 	usb_unlock_device(dev);
 	usb_put_dev(dev);
 	put_pid(ps->disc_pid);
+
+	as = async_getcompleted(ps);
+	while (as) {
+		free_async(as);
+		as = async_getcompleted(ps);
+	}
 	kfree(ps);
 	return 0;
 }
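
For reference (not part of the patch), here is a minimal userspace
sketch of the submit/reap cycle this change protects.  The device node
path, interface number, and endpoint address are placeholder
assumptions, and error handling is trimmed:

/*
 * Minimal usbfs submit/reap sketch.  The node path, interface number,
 * and endpoint address are placeholders; real code must match the
 * target device.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/usbdevice_fs.h>

int main(void)
{
	int fd = open("/dev/bus/usb/001/002", O_RDWR);	/* placeholder node */
	unsigned int ifnum = 0;				/* placeholder interface */
	static unsigned char buf[64];
	struct usbdevfs_urb urb, *reaped;

	if (fd < 0)
		return 1;
	ioctl(fd, USBDEVFS_CLAIMINTERFACE, &ifnum);

	memset(&urb, 0, sizeof(urb));
	urb.type = USBDEVFS_URB_TYPE_BULK;
	urb.endpoint = 0x81;				/* placeholder IN endpoint */
	urb.buffer = buf;
	urb.buffer_length = sizeof(buf);
	ioctl(fd, USBDEVFS_SUBMITURB, &urb);

	/*
	 * Block until an URB completes.  Even if the device is unplugged
	 * at this point, the completed entry stays on ps->async_completed
	 * until this fd is closed, so the reap returns instead of hanging.
	 */
	if (ioctl(fd, USBDEVFS_REAPURB, &reaped) == 0)
		printf("status %d, %d bytes\n", reaped->status,
		       reaped->actual_length);

	close(fd);		/* usbdev_release() now frees any leftovers */
	return 0;
}

With the patch applied, the USBDEVFS_REAPURB call above returns the
completed entry even after a disconnect, because the struct async
survives on the completed list until close(fd) rather than being freed
by destroy_async().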