On Fri, 11 Oct 2013, Greg KH wrote:

> On Fri, Oct 11, 2013 at 11:29:29AM -0400, Alan Stern wrote:
> > This patch continues the scheduling changes in ehci-hcd by adding a
> > table to store the bandwidth allocation below each TT.  This will
> > speed up the scheduling code, as it will no longer need to read
> > through the entire schedule to compute the bandwidth currently in use.
> >
> > Properly speaking, the FS/LS budget calculations should be done in
> > terms of full-speed bytes per microframe, as described in the USB-2
> > spec.  However the driver currently uses microseconds per microframe,
> > and the scheduling code isn't robust enough at this point to change
> > over.  For the time being, we leave the calculations as they are.
> >
> > Signed-off-by: Alan Stern <stern@xxxxxxxxxxxxxxxxxxx>
>
> When I apply this to my tree, there is one line of fuzz in
> ehci-sched.c, which makes me worried we are out of sync somehow with
> our trees.

No, the fuzz is my fault.  I changed an earlier patch and then didn't
notice the fuzz message when re-applying the later patch.

> The diff between your patch, and if I was to resolve the fuzz and
> recreate the patch is:
>
> @@ -619,8 +501,8 @@
> + got_it:
>  	qh->ps.phase = (qh->ps.period ? ehci->random_frame &
>  			(qh->ps.period - 1) : 0);
> -	qh->ps.bw_phase = qh->ps.phase & (qh->ps.bw_uperiod - 1);
> -@@ -1232,6 +1336,8 @@ static void reserve_release_iso_bandwidt
> +	qh->ps.bw_phase = qh->ps.phase & (qh->ps.bw_period - 1);
>
> How about I stop here, and you check my tree to verify I didn't mess
> anything up?  I've applied the first 9 in this series.

Looks okay so far.  I'll resend the 10th patch.  The de-fuzzed version
differs by only one character from the original.  :-)  The 11th patch
isn't affected.

> Oh, and thanks so much for doing this work, it looks great, and
> hopefully will resolve the issues that people have run into in the past.

There's still a long way to go...

Alan Stern

--
To unsubscribe from this list: send the line "unsubscribe linux-usb" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
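[Editor's note: the quoted patch description explains the idea behind the change: keep a per-TT table of already-committed bandwidth so the scheduler can test a candidate slot directly instead of walking the entire periodic schedule.  The sketch below illustrates that idea only.  It is not the actual ehci-hcd code; the names (tt_budget, TT_BW_FRAMES, FS_FRAME_USECS, tt_bandwidth_fits, tt_bandwidth_adjust) and the table depth are hypothetical, and, like the driver at the time of this mail, it budgets in microseconds rather than full-speed bytes.]

/*
 * Illustrative sketch of a per-TT bandwidth table (hypothetical names).
 * One entry per frame records the time already committed below the TT,
 * so checking or updating a reservation is a short loop over the table
 * rather than a scan of the whole periodic schedule.
 */
#include <stdint.h>
#include <stdbool.h>

#define TT_BW_FRAMES	32	/* hypothetical table depth, in frames */
#define FS_FRAME_USECS	900	/* rough usable FS budget per frame */

struct tt_budget {
	uint16_t usecs[TT_BW_FRAMES];	/* committed time in each frame */
};

/*
 * Would a transfer needing 'usecs' per frame fit in every frame of its
 * period, starting at 'phase'?  Assumes period >= 1 and phase < period.
 */
static bool tt_bandwidth_fits(const struct tt_budget *tt, unsigned int phase,
			      unsigned int period, unsigned int usecs)
{
	unsigned int f;

	for (f = phase; f < TT_BW_FRAMES; f += period)
		if (tt->usecs[f] + usecs > FS_FRAME_USECS)
			return false;
	return true;
}

/* Commit a reservation; pass a negative 'usecs' to release it again. */
static void tt_bandwidth_adjust(struct tt_budget *tt, unsigned int phase,
				unsigned int period, int usecs)
{
	unsigned int f;

	for (f = phase; f < TT_BW_FRAMES; f += period)
		tt->usecs[f] += usecs;
}

[With a table like this, the check-then-commit path is O(frames/period) per request, independent of how many other endpoints are already scheduled, which is the speedup the quoted changelog refers to.]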