On Mon, Jun 30, 2014 at 09:11:11AM +0200, Andrzej Pietrasiewicz wrote:
> Hello Peter,
>
> On 30.06.2014 06:15, Peter Chen wrote:
> >We need to delete the unfinished td from the current request's td
> >list in the ep_dequeue API; otherwise this stale td remains on the
> >td list until the request is freed. So for an
> >ep_queue->ep_dequeue->ep_queue sequence, when the completion
> >interrupt for the second ep_queue arrives and we search the td list
> >for this request, the first td (added by the first ep_queue) is
> >handled first. Its status is still active, so we consider the
> >transfer not yet completed, when in fact it has completed. As a
> >result the peripheral side believes it never received the current
> >data for this transfer.
>
> <snip>
>
> >+	struct td_node *node, *tmpnode;
> >
> > 	if (ep == NULL || req == NULL || hwreq->req.status != -EALREADY ||
> > 		hwep->ep.desc == NULL || list_empty(&hwreq->queue) ||
> >@@ -1331,6 +1332,13 @@ static int ep_dequeue(struct usb_ep *ep, struct usb_request *req)
> >
> > 	hw_ep_flush(hwep->ci, hwep->num, hwep->dir);
> >
> >+	list_for_each_entry_safe(node, tmpnode, &hwreq->tds, td) {
> >+		dma_pool_free(hwep->td_pool, node->ptr, node->dma);
> >+		list_del_init(&node->td);
> >+		node->ptr = NULL;
>
> Why bother initializing node->td's prev and next members and assigning
> NULL to ptr if...
>
> >+		kfree(node);
>
> ... in the very next line the whole structure is freed?
>
> I would suggest
>
> +	list_for_each_entry_safe(node, tmpnode, &hwreq->tds, td) {
> +		dma_pool_free(hwep->td_pool, node->ptr, node->dma);
> +		list_del(&node->td);
> +		kfree(node);
> +	}

Thanks, AP. Yes, it seems that is enough; I just copied it from
ep_free_request. I don't know why Michael did it that way originally.
Michael, can you recall?

Besides, I see some code uses list_del_init(); why do we need to
re-initialize the entry after we delete it from the list?

--

Best Regards,
Peter Chen
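
As an aside on the list_del_init() question: re-initialization only
matters when the entry outlives its removal from the list. Here is a
minimal sketch of the distinction (struct foo, drop_node() and
park_node() are made-up names for illustration; list_del(),
list_del_init(), list_empty() and kfree() are the real helpers from
<linux/list.h> and <linux/slab.h>):

#include <linux/list.h>
#include <linux/slab.h>

/* Hypothetical node type, for illustration only. */
struct foo {
	struct list_head link;
};

/*
 * The node is freed right after it is unlinked, as in the ep_dequeue()
 * hunk above: list_del() is enough, since nothing can look at the
 * entry between the unlink and the kfree().
 */
static void drop_node(struct foo *f)
{
	list_del(&f->link);
	kfree(f);
}

/*
 * The node lives on after it is unlinked: list_del_init() leaves link
 * pointing at itself, so a later list_empty(&f->link) test or a
 * list_add() of the node is safe. A plain list_del() would poison
 * link.next and link.prev (LIST_POISON1/2): a later list_del() or
 * list_add() through the entry would dereference the poison values and
 * fault, and list_empty() would wrongly report non-empty.
 */
static void park_node(struct foo *f)
{
	list_del_init(&f->link);
}

So as long as nothing inspects the node between the unlink and the
free, the list_del() + kfree() pair AP suggests should behave the same
and save a few stores.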