In addition to below, I agree with your other email in that the current
code can allow the execution of a transfer to be delayed up to (but no
more than) 1 uframe (I mentioned this several days ago also). So as far
as changes that need to be made, is this a complete list:

1. use best-case calculations in transfer time (currently uses
   worst-case calculations) - see the sketch after this list
2. ensure all interrupt transfers come after all isoc transfers within
   a frame
3. change scheduler logic to:
   a) ensure that, for each uframe, the start of the last transfer does
      not get delayed into the next uframe (this will probably replace
      most of the current logic)
   b) allow starting any transfer (including a large isoc) in a
      non-empty uframe
4. implement rebalancing
5. implement FSTN

Did I miss anything?
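For #1, here is roughly the distinction I have in mind. This is just a
sketch, not tested; the function names are mine, HOST_DELAY_NS stands
in for the HC-specific Host_Delay, and the constants come from sec
5.11.3 and table 5-4 (the worst-case half is essentially what
usb_calc_bus_time() already computes):

/*
 * Sketch only (untested): worst- vs. best-case cost of one full-speed
 * isoc OUT transaction.  Constants from USB 2.0 sec 5.11.3 and table
 * 5-4; HOST_DELAY_NS is a placeholder, since Host_Delay is really
 * host-controller specific.
 */
#define HOST_DELAY_NS	1000UL

/*
 * "Budgeting" (worst case), the 5.11.3 formula in integer math:
 *   6265 + 83.54 * floor(3.167 + BitStuffTime(bytes)) + Host_Delay
 * where BitStuffTime(n) = 1.1667 * 8 * n, i.e. the 7/6 bit-stuffing
 * factor applied to every bit.  Returns ns.
 */
static unsigned long fs_isoc_out_worst_ns(unsigned int bytes)
{
	unsigned long stuffed_bits = 7UL * 8 * bytes / 6;

	return 6265 + 8354UL * (31 + 10 * stuffed_bits) / 1000 +
		HOST_DELAY_NS;
}

/*
 * "Scheduling" (best case): no bit stuffing, just raw bytes on the
 * wire -- table 5-4's 9 bytes of protocol overhead (which already
 * includes the 1-byte interpacket delay) plus the payload.  This is
 * what would be counted against the 1157-byte/frame limit of
 * fig 11-66.
 */
static unsigned int fs_isoc_best_bytes(unsigned int bytes)
{
	return 9 + bytes;
}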
On Thu, Apr 16, 2009 at 4:56 PM, Alan Stern <stern@xxxxxxxxxxxxxxxxxxx> wrote:
> On Thu, 16 Apr 2009, Dan Streetman wrote:
>
>> If you don't mind, I'd like to summarize, as I think we only really
>> disagree on 2 points:
>>
>> 1. Can more than 188 bytes worth of SSPLITs be issued in a single uframe?
>> 2. Can an Isoc transfer that is larger than 188 bytes be started in a
>> uframe that is not empty?
>>
>> I believe you think no on both those points, while I think yes on both
>> points. Note that I think you were confusing my position on #2 with
>> the current implementation - the current tt_available() implementation
>> agrees with #2 *only* because it's easier to perform the calculations
>> that way.
>
> Our emails are passing in the night. :-)
>
> You've convinced me on #1. With the additional proviso that in any
> four consecutive uframes, no more than 752 bytes of SSPLIT-OUT data
> may occur (which the current code doesn't check for!).

Well, no, you are right. I think that the carryover logic prevents this
from happening, but I haven't specifically thought about that case.

>> I think all of us (you, me, and David) agree on:
>>
>> 1. The TT scheduler should be putting all the interrupt transfers
>> after the isoc transfers.
>
> That's certainly the _easiest_ way to do it. On the other hand, with
> the current code that would prevent you from accepting _any_ isoc
> transfers larger than 188 bytes if there's an interrupt transfer
> scheduled in uframe 0.
>
> Of course, the current code could be improved...
>
>> 2. It would be good to implement "rebalancing", either for moving a
>> single transfer's position later in the same frames, or for moving
>> many transfers or moving transfers into a completely new frame
>> position, or both. Also FSTN would be nice to add (for EHCI hardware
>> that supports it).
>
> Rebalancing might be more effort than it is worth, at least for now.
> FSTN support probably should be added. (And backpointers for siTD's,
> if they aren't already in there -- I haven't looked.)
>
>> There is also the issue of what bandwidth calculation the TT scheduler
>> should use. It currently uses the regular low/full-speed calculation
>> as specified in 5.11.3, but you disagree that this is appropriate
>> because that calculation is worst-case bitstuffing, while 11.18.1 says
>> best case (no bitstuffing) should be used (although it then
>> contradicts itself by calculating the per-uframe byte capacity using
>> worst-case bitstuffing).
>>
>> I'm not sure whether to use best-case calculations, which we would
>> need to create a new function for, or continue to use the worst-case
>> regular 5.11.3 function. I tend to lean towards the 5.11.3 function,
>> since the 188-byte calculation also seems to use worst-case
>> bitstuffing despite what it says in the text. The calculation that
>> comes up with the 1157 value in the figure does use bitstuffing.
>> Plugging 1157 into the 5.11.3 calculations gives roughly 900us. Using
>> the smallest-time (isoc out) calculation with 1 byte does give 16
>> maximum transfers per uframe as specified later in sec 11.18. So, as
>> you pointed out, it seems that although the spec says "best case no
>> bitstuffing" it doesn't mean it.
>
> Let's be more precise about this. Properly handling periodic transfers
> requires both _budgeting_ and _scheduling_:
>
>     "Budgeting" means checking that the transfers don't use more
>     than 90% of the full/low-speed bandwidth or 80% of the
>     high-speed bandwidth.
>
>     "Scheduling" means deciding which [u]frames the transfers are
>     assigned to and which uframes the S/CSPLITs are assigned to.
>
> For budgeting you have to use worst-case figures, i.e., the formulas in
> 5.11.3 (which are given in ns and can be converted to us with no
> difficulty). Otherwise the guarantees might not be met.
>
> For scheduling you are supposed to use best-case byte counts; otherwise
> you might fail to accept transfers that could be accommodated.
>
> Obviously the two areas interact, because the budgeting will depend on
> which frames and uframes you allocate for the transfers. Still, they
> are logically separate notions.
>
> Does this clarify things?

Yes. I think I finally understand sec 11.18.1 and fig 11-66. I didn't
understand it before, but as you pointed out in your other email,
stopping with only 29 bytes allowed in uframe 6 is nowhere near 90%.

What the spec seems to have done here is take the 90% requirement as
applying to the worst-case calculation ("budgeting"), and reverse that
to say: in the best case, we can schedule 1157 bytes total per frame.
When we do a best-case (no bit-stuffing) scheduling of 1157 bytes, it
will probably take a little more than 1157 bytes on the wire due to
bit-stuffing, but even in the very worst case it will never go above
90% of the frame. So "schedule" for 1157 best-case bytes; "budget" for
90% of the frame.

So the scheduler shouldn't use the formula from sec 5.11.3, but it does
need to account for the "interpacket gap" and "bus turnaround time", as
well as the TT think time, right? Do we know those numbers? For
example, table 5-4 says a full-speed isoc transfer has 1 byte of
"interpacket delay". I am not seeing where that number is coming
from...?
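P.S. For what it's worth, the 1157 value in fig 11-66 falls straight
out of the 90% budget once worst-case bit-stuffing is put back in.
This is my arithmetic; the spec doesn't actually show it:

    full-speed frame:    12 Mbit/s * 1 ms   = 1500 bytes
    90% budget:          0.9 * 1500         = 1350 bytes
    un-stuffed (* 6/7):  1350 * 6/7         =~ 1157 bytes

So 1157 best-case bytes can never bit-stuff out to more than 90% of a
frame, which is exactly the "schedule best-case, budget worst-case"
reading above.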