Hello, I wrote:
Looks like this patch indeed needs rework or even replacement...
The urb_dequeue() method "forgets" to unlink 'struct musb_qh' from the control/
bulk schedule list when the URB being cancelled is the only one queued to its
endpoint, which will cause musb_advance_schedule() to "block" once it reaches a
'struct musb_qh' with a now empty URB list, and so URBs queued for other control/
bulk endpoints after the one being dequeued will not be served. Hence add code
to this method to unlink and free 'struct musb_qh' once its URB list has
emptied, and remove the now useless check from musb_advance_schedule()...
Signed-off-by: Sergei Shtylyov <sshtylyov@xxxxxxxxxxxxx>
---
This patch replaces a part of the patch originally posted by Ajay Kumar Gupta
(http://marc.info/?l=linux-usb&m=122284678326862) which, however, dealt with the
issue incorrectly...
drivers/usb/musb/musb_host.c | 9 +++++++--
1 files changed, 7 insertions(+), 2 deletions(-)
Index: mainline/drivers/usb/musb/musb_host.c
===================================================================
--- mainline.orig/drivers/usb/musb/musb_host.c
+++ mainline/drivers/usb/musb/musb_host.c
@@ -432,7 +432,7 @@ musb_advance_schedule(struct musb *musb,
 	else
 		qh = musb_giveback(qh, urb, urb->status);
 
-	if (qh && qh->is_ready && !list_empty(&qh->hep->urb_list)) {
+	if (qh != NULL && qh->is_ready) {
 		DBG(4, "... next ep%d %cX urb %p\n",
 				hw_ep->epnum, is_in ? 'R' : 'T',
 				next_urb(qh));
@@ -2136,7 +2136,12 @@ static int musb_urb_dequeue(struct usb_h
 		ret = 0;
 		qh->is_ready = 0;
 		__musb_giveback(musb, urb, 0);
-		qh->is_ready = ready;
+		if (list_empty(&qh->hep->urb_list)) {
+			qh->hep->hcpriv = NULL;
+			list_del(&qh->ring);
+			kfree(qh);

Oops, this also needs to invalidate hw_ep->{in|out}_qh in case we got here
because of !qh->is_ready. What a mess... :-/

+		} else
+			qh->is_ready = ready;
 	} else
 		ret = musb_cleanup_urb(urb, qh, urb->pipe & USB_DIR_IN);
 done:
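
For illustration, here's an untested sketch of what the hunk above might look
like with that invalidation folded in. The in_qh/out_qh handling is my guess,
assuming the usual 'struct musb_hw_ep' layout, not something I've run:

	if (list_empty(&qh->hep->urb_list)) {
		struct musb_hw_ep *hw_ep = qh->hw_ep;

		/* unlink the qh from the control/bulk schedule ring... */
		qh->hep->hcpriv = NULL;
		list_del(&qh->ring);

		/* ...and drop the hardware endpoint's stale pointer too,
		 * since with !qh->is_ready the qh may still be programmed
		 * into the hardware endpoint */
		if (urb->pipe & USB_DIR_IN)
			hw_ep->in_qh = NULL;
		else
			hw_ep->out_qh = NULL;

		kfree(qh);
	} else
		qh->is_ready = ready;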
I've dug into an issue I've been having when running aplay (starting DMA
on a busy channel at the end of playback), and it turned out that this patch
uncovered a problem with the current approach of keeping qh allocated only
while its corresponding URB list is not empty (an approach which the patch
only tried to apply consistently). Things go wrong when the following scenario
takes place (see the annotated sketch after the list):
1) musb_host_tx() -> musb_advance_schedule() -> musb_giveback();
2) musb_giveback() -> __musb_giveback() -> usb_hcd_giveback_urb();
3) usb_hcd_giveback_urb() calls audio driver;
4) audio driver -> usb_unlink_urb() -> musb_urb_dequeue() -> musb_giveback();
5) steps (2) to (4) repeat until the URB list empties, then musb_urb_dequeue()
calls kfree();
6) audio driver (still from the 1st callback) calls usb_submit_urb() ->
musb_urb_enqueue();
7) musb_urb_enqueue() re-allocates qh and sets its is_ready field (but it
shouldn't be set, as we're still in the callback!) and calls musb_schedule() ->
8) musb_start_urb() -> musb_ep_program() -> DMA channel_program() method and
then cppi_host_txdma_start();
9) the first driver callback finally returns to __musb_giveback() and then
to musb_giveback(), which dereferences the already freed qh (in practice the
new allocation returns the same address), sees that the URB list is not
empty, and returns that qh to musb_advance_schedule();
10) musb_advance_schedule() sees a non-NULL qh with is_ready set and starts the
first URB from the list *again*... voila!
Therefore it seems that instead of following the current approach
consistently, we need to change that approach and keep qh around until the
corresponding endpoint is disabled, which turns out to be not only a speed
improvement but the Right Thing (TM) to do.
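
Roughly, the idea would be (just a sketch of mine, not a patch): leave an
empty qh linked on its ring, and make the endpoint_disable() hook, i.e.
musb_h_disable() in musb_host.c, the single place that tears it down:

	/* Sketch only: free the qh in the endpoint_disable() hook instead
	 * of in urb_dequeue()/giveback; locking as in the current driver. */
	static void musb_h_disable(struct usb_hcd *hcd,
				   struct usb_host_endpoint *hep)
	{
		struct musb	*musb = hcd_to_musb(hcd);
		unsigned long	flags;
		struct musb_qh	*qh;

		spin_lock_irqsave(&musb->lock, flags);
		qh = hep->hcpriv;
		if (qh != NULL) {
			/* first kill any URBs still queued to the endpoint,
			 * then unlink the qh from its schedule ring and free
			 * it, now that nobody can reach it any more */
			hep->hcpriv = NULL;
			list_del(&qh->ring);
			kfree(qh);
		}
		spin_unlock_irqrestore(&musb->lock, flags);
	}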
Besides, the scheduling algorithm implemented in musb_giveback() is
unfair, so it's no wonder TI pushed the patch that tries to avoid queueing bulk
URBs to the dedicated endpoint, which seems only a palliative measure to me. We
should instead cycle through the qh ring and execute next_urb(qh). Well, maybe
I'll cook up something... I should've noticed that way earlier. :-/
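
Something along these lines for the bulk case, perhaps (an untested sketch,
assuming the 'out_bulk' list head that musb_schedule() already uses):

	/* Untested fairness sketch: after giving back an URB, move on to
	 * the next qh on the shared bulk ring (wrapping past the list
	 * head) instead of restarting the same endpoint every time. */
	struct list_head *head = &musb->out_bulk;
	struct musb_qh *next_qh;

	if (qh->ring.next != head)
		next_qh = list_entry(qh->ring.next, struct musb_qh, ring);
	else	/* wrap around; if qh is alone on the ring, we get it back */
		next_qh = list_first_entry(head, struct musb_qh, ring);

	/* musb_advance_schedule() would then start next_urb(next_qh) */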
WBR, Sergei