On Tuesday, 11 August 2009 03:51:04, Elina Pasheva wrote:
> If you look at the sierra_resume() function, in the 'while' loop where it
> processes the delayed queue, and further down right after the 'for' loop:
> between the marks "----->" (i.e. between spin_unlock_irq() and the next
> spin_lock_irq() that guards setting the "suspend" flag) there is a window
> of opportunity for sierra_write() to be called, and because the suspend
> flag still indicates "suspended" (see ******* in sierra_write()) these
> urbs will be kept on the delayed queue. But right after that the
> "suspend" status is set to 0 by the resume() function, so in reality the
> urbs in the delayed queue will not be processed until the next
> (auto)suspend condition occurs, or they may be lost if the port is closed.
>
> This window of opportunity may be hit, for instance on a dual-core
> architecture, by the other CPU calling the write() function.
> This condition exists for the option driver as well.

Hi,

sorry for the delay, I suffered some major hardware problems.

The race you describe is due to incorrect locking. The locking needs to be
changed, because the flag cannot be cleared earlier without changing the
order of writes. It seems to me that the correct version looks like this:

	portdata = usb_get_serial_port_data(port);

	spin_lock_irq(&intfdata->susp_lock);
	while ((urb = usb_get_from_anchor(&portdata->delayed))) {
		usb_anchor_urb(urb, &portdata->active);
		intfdata->in_flight++;
		/*
		 * The lock is dropped only around the submission; a
		 * sierra_write() running in this window still sees
		 * "suspended" and anchors its urb on the delayed queue,
		 * which the next loop iteration picks up again.
		 */
		spin_unlock_irq(&intfdata->susp_lock);
		err = usb_submit_urb(urb, GFP_NOIO);
		if (err < 0) {
			spin_lock_irq(&intfdata->susp_lock);
			intfdata->in_flight--;
			spin_unlock_irq(&intfdata->susp_lock);
			usb_unanchor_urb(urb);
			usb_scuttle_anchored_urbs(&portdata->delayed);
			spin_lock_irq(&intfdata->susp_lock);
			break;
		}
		spin_lock_irq(&intfdata->susp_lock);
	}
	/*
	 * The delayed queue is empty and the lock is still held, so the
	 * flag can be cleared without a concurrent write slipping in
	 * between the drain loop and the flag update.
	 */
	intfdata->suspended = 0;
	if (portdata->opened) {
		err = sierra_submit_rx_urbs(port, GFP_ATOMIC);
		if (err)
			ec++;
	}
	spin_unlock_irq(&intfdata->susp_lock);

	Regards
		Oliver
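
PS: For illustration only, a simplified sketch of the write-side path the
race hits. This is not the driver's literal code; the names (intfdata,
portdata, susp_lock, the delayed and active anchors) just follow the
discussion above, and urb setup and most error handling are trimmed:

	/* inside sierra_write(), after the urb has been prepared */
	spin_lock_irqsave(&intfdata->susp_lock, flags);
	if (intfdata->suspended) {
		/* interface (still) suspended: park the urb for resume() */
		usb_anchor_urb(urb, &portdata->delayed);
		spin_unlock_irqrestore(&intfdata->susp_lock, flags);
		return count;
	}
	/* not suspended: account for the urb and submit it right away */
	usb_anchor_urb(urb, &portdata->active);
	intfdata->in_flight++;
	spin_unlock_irqrestore(&intfdata->susp_lock, flags);

	err = usb_submit_urb(urb, GFP_ATOMIC);
	if (err) {
		spin_lock_irqsave(&intfdata->susp_lock, flags);
		intfdata->in_flight--;
		spin_unlock_irqrestore(&intfdata->susp_lock, flags);
		usb_unanchor_urb(urb);
	}

Because the flag is tested here under the same susp_lock that resume()
holds while draining the delayed queue and clearing the flag, a write that
races with resume either parks its urb where the drain loop can still see
it, or observes the cleared flag and submits directly.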