If you cancel an isoc urb that has multiple TDs (and I suspect that is most isoc urbs) then (if I'm reading the code correctly) the urb completion routine is called for each TD, and the memory for each td_list entry is freed separately, even though they were all allocated by a single kmalloc(). Neither of these is a good idea!

In addition, the code that handles the 'td_list' (a per-ring linked list of transfer descriptors, usually one per URB, but more for isoc transfers) and the 'cancelled_td_list' (of the same items, but per endpoint) is horribly complex and sometimes fails to find the required item.

Rather than desperately trying to find the code paths where it all goes wrong - making the code even more complex and less robust - I propose the following:

Add a soft_trb[] array to xhci_segment, mirroring the hardware one. For transfer rings each entry would contain a pointer to the urb, the array index (so the xhci_segment can be found), and some flags (last_td, last_trb, cancelled etc). The urb->hcpriv pointer would point to the first soft_trb[] entry for the transfer. Some back pointers through to the ring, ep etc might help as well. (A rough sketch is at the end of this mail.)

Normal completion handling can then easily find the urb and work out whether it has completed. As the deq pointer is advanced, the soft_trb[] entries are invalidated.

To cancel a urb, its soft_trbs can be scanned and marked 'cancelled', and then the ring/endpoint stopped. When the ring is stopped it is most likely that all of the transfers have been cancelled, so just scan from the deq pointer to the enq pointer, converting cancelled TRBs to NOPs and completing the urbs. It is probably worth noting whether the ring has any cancelled TRBs - especially if streams are in use.

Some extra notes:
- The command ring could use the soft_trb[] for handling timeouts and completions.
- The event ring doesn't need it.
- The ring segment size needs to be a few less than 256 so that the xhci_segment structure isn't a silly size (eg 4k+32).

Any thoughts?

	David
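
PS: To make the soft_trb[] idea a bit more concrete, here is a very
rough sketch. The field, flag and helper names are made up here for
illustration (only TRBS_PER_SEGMENT is the existing xhci.h constant),
and it glosses over locking, streams etc:

	/* One software entry per hardware TRB, held in the segment itself. */
	struct soft_trb {
		struct urb	*urb;	/* owning urb, NULL once the deq pointer has passed */
		u8		index;	/* position in the segment, see next_soft_trb() */
		u8		flags;
	};

	#define SOFT_TRB_LAST_TD	0x01	/* last TRB of a TD */
	#define SOFT_TRB_LAST_TRB	0x02	/* last TRB of the whole transfer */
	#define SOFT_TRB_CANCELLED	0x04	/* convert to NOP once the ring is stopped */

	struct xhci_segment {
		union xhci_trb	*trbs;	/* hardware TRBs (dma coherent), as now */
		dma_addr_t	dma;
		struct xhci_segment *next;
		struct soft_trb	soft_trb[TRBS_PER_SEGMENT];	/* software mirror */
	};

	/* The index field lets us get back from a soft_trb to its segment,
	 * and from there to the next segment across the link TRB. */
	static struct soft_trb *next_soft_trb(struct soft_trb *strb)
	{
		struct xhci_segment *seg =
			container_of(strb - strb->index,
				     struct xhci_segment, soft_trb[0]);

		/* the last slot holds the link TRB, so skip to the next segment */
		if (strb->index == TRBS_PER_SEGMENT - 2)
			return &seg->next->soft_trb[0];
		return strb + 1;
	}

	/* Cancelling is then just a walk from urb->hcpriv; the actual
	 * cleanup (NOP conversion, urb completion) happens after the
	 * endpoint has been stopped. */
	static void mark_urb_cancelled(struct urb *urb)
	{
		struct soft_trb *strb = urb->hcpriv;

		for (;;) {
			strb->flags |= SOFT_TRB_CANCELLED;
			if (strb->flags & SOFT_TRB_LAST_TRB)
				break;
			strb = next_soft_trb(strb);
		}
	}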