Re: Changes to ehci-hcd periodic transfer code

On Fri, 5 Jun 2009, Dan Streetman wrote:

> On Wed, Jun 3, 2009 at 12:33 PM, Alan Stern<stern@xxxxxxxxxxxxxxxxxxx> wrote:
> > Dave and Dan:
> >
> > Having read through most of ehci-sched.c, I've got some proposals for
> > improvements.  First I'd like to run them by you, to see whether you
> > object to any of them or have suggestions for better approaches.
> >
> >
> >        1. Periodic bandwidth accounting is wrong.
> >
> > The periodic_usecs() and periodic_tt_usecs() routines both work by
> > counting the allocations for each QH, iTD, and siTD for the frame in
> > question, thereby arriving at the total number of microseconds in use
> > by periodic transfers during that frame.
> >
> > But "in use" != "allocated"!  An endpoint may have bandwidth allocated
> > during a frame without having any transfers queued during that frame.
> > For example, suppose a period-1 Iso endpoint has 10 iTDs queued.  Its
> > bandwidth should still be allocated for _all_ frames, even though the
> > time is in use for only 10 of them.
> 
> This seems to be only for iTDs/siTDs, right?  Since a QH should be
> scheduled in all frames for its period.

Right.

> > To solve this, I'll create an array of N*8 bytes to store the number of
> > us allocated for periodic transfers during each uframe for the N frames
> > in the periodic schedule.  This has the advantage of allowing bandwidth
> > to remain allocated even when an endpoint isn't in use, should we
> > decide to allow such a thing.
> 
> Yep, that makes sense.  When a QH or iso_stream is scheduled for the
> first time, add the new allocated bandwidth to the array in all the
> frames for the QH/iso_stream's period.  When the QH/iso_stream is
> removed, subtract the bandwidth.  That makes checking the currently
> reserved bandwidth easy, instead of walking the tree to check each QH
> or TD scheduled.

That's the idea, you've got it.

A possible space-optimization is to allocate something smaller than
N*8, such as 32*8.  It reduces the size of the array, but it also means
that any transfer with a period higher than 32 frames would have to be
charged for bandwidth as though the period were 32.  Since these
high-period transfers generally don't take up a significant portion of
the total bandwidth anyway, I don't think this will matter much.
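A minimal sketch of this reservation scheme, including the 32-frame clamp, might look like the following.  All names and sizes here are illustrative, not actual ehci-hcd code:

```c
#define BW_FRAMES   32          /* periods above 32 are charged as 32 */
#define BW_UFRAMES  8

/* usecs allocated per (frame, uframe) slot across the periodic schedule */
static unsigned char bw[BW_FRAMES][BW_UFRAMES];

/* Reserve `usecs` in uframe `uf` of every frame the endpoint may use. */
static void bw_reserve(unsigned period, unsigned start_frame,
		       unsigned uf, unsigned usecs)
{
	unsigned f;

	if (period > BW_FRAMES)
		period = BW_FRAMES;	/* over-charge high-period transfers */
	for (f = start_frame % period; f < BW_FRAMES; f += period)
		bw[f][uf] += usecs;
}

/* Give the reservation back when the QH/iso_stream is unlinked. */
static void bw_release(unsigned period, unsigned start_frame,
		       unsigned uf, unsigned usecs)
{
	unsigned f;

	if (period > BW_FRAMES)
		period = BW_FRAMES;
	for (f = start_frame % period; f < BW_FRAMES; f += period)
		bw[f][uf] -= usecs;
}
```

Checking the bandwidth available in a candidate slot then reduces to reading bw[f][uf], with no walk of the schedule tree.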

> > To do the same sort of thing for FS/LS periodic transfers, I'll
> > allocate a data structure for each active TT.  It may contain a similar
> > array to account for allocated bandwidth; it certainly will hold a
> > list_head for all the QHs and iso_streams going through the TT.
> 
> Sounds good to me.
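As a rough illustration, the per-TT structure might be shaped like this; the field and type names are guesses for the sketch, not the eventual implementation (struct list_head is re-declared here only to keep the sketch self-contained outside the kernel tree):

```c
#include <string.h>

#define TT_BW_FRAMES 32

struct list_head {
	struct list_head *next, *prev;
};

/* One instance per active transaction translator */
struct ehci_tt {
	/* FS/LS usecs allocated in each frame of the TT's schedule */
	unsigned short bandwidth[TT_BW_FRAMES];
	/* every QH and iso_stream whose transfers go through this TT */
	struct list_head ps_list;
};

static void ehci_tt_init(struct ehci_tt *tt)
{
	memset(tt->bandwidth, 0, sizeof(tt->bandwidth));
	tt->ps_list.next = tt->ps_list.prev = &tt->ps_list; /* empty list */
}
```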


> >        4. Rewrite the FS/LS scheduler.
> >
> > In the end, this comes down to a single subroutine to calculate in
> > which uframes a transfer can be scheduled to start.  The computation
> > isn't hard but it is a little messy.  Once it's done, all that remains
> > is a strategy for choosing which uframe to use.  For instance, it might
> > be better to bias Isoc transfers towards earlier uframes and Interrupt
> > transfers towards later uframes.
> 
> Yep...messy.
> 
> Also, are you planning to differentiate between bandwidth allocation,
> which should use worst-case timing, and scheduling, which should use
> best-case timing?

Yes.
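The uframe bias mentioned above could amount to nothing more than scanning candidates in opposite orders.  A toy sketch, with hypothetical names and the "does it fit" test reduced to a flag array:

```c
/* Return the first acceptable uframe, scanning 0..7 for Iso transfers
 * and 7..0 for Interrupt transfers, or -1 if none has room. */
static int pick_uframe(int is_iso, const int avail[8])
{
	unsigned i;

	for (i = 0; i < 8; i++) {
		unsigned uf = is_iso ? i : 7 - i;

		if (avail[uf])
			return uf;
	}
	return -1;
}
```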

>  I believe the current functions to calculate
> bandwidth and scheduling, which are taken from the spec section
> 5.11.3, use worst-case timing (i.e. max bit stuffing).  However, as
> you pointed out before, the actual scheduling of transactions should
> use best-case timing.  New function(s) would need to be created to
> calculate the best-case timings.

Yep.  They're pretty easy, compared to the worst-case formulas.
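The difference between the two cases is essentially just the bit-stuffing allowance.  A simplified sketch of the data-stage time only, ignoring the token, turnaround, and host-delay terms from the spec's full formulas and using an approximate 84 ns per full-speed bit:

```c
/* Illustration only: worst case adds maximum bit stuffing, roughly one
 * extra bit per six data bits; best case assumes no stuffing at all. */
static unsigned long fs_data_ns(unsigned bytes, int worst_case)
{
	unsigned long bits = (unsigned long)bytes * 8;

	if (worst_case)
		bits += bits / 6;	/* max bit-stuffing overhead */
	return bits * 84;		/* ~83.54 ns per full-speed bit */
}
```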

> Is there a need for two different arrays for each TT: one tracking
> best-case timing, for use in choosing when to schedule transfers, and
> one tracking worst-case timing, for verifying the various max-use
> rules such as the 80% frame limit?

The best-case timing can't easily be tracked in this way, since it
isn't just a "total number of microseconds" sort of thing; it depends 
on exactly when in each uframe a transfer is scheduled to start.  The
calculation will have to be done by walking the list of transfers
currently scheduled through the TT for each frame.

This will make more sense when you see the code.  I hope...
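In the meantime, here's a toy sketch of the walking idea (all names hypothetical): since each transfer's contribution depends on where in the frame it starts, the point at which the bus becomes free is recomputed by visiting every transfer scheduled in that frame rather than kept as a running total.

```c
struct fs_transfer {
	unsigned start_us;		/* offset within the 1 ms frame */
	unsigned best_case_us;		/* duration assuming no bit stuffing */
	struct fs_transfer *next;	/* next transfer in this frame */
};

/* Return the earliest time the bus is free of scheduled transfers. */
static unsigned frame_busy_until(const struct fs_transfer *head)
{
	unsigned end = 0;

	for (; head; head = head->next) {
		unsigned t = head->start_us + head->best_case_us;

		if (t > end)
			end = t;
	}
	return end;
}
```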

> Also, I think David suggested moving all the TT scheduling code into a
> common location, so other non-EHCI highspeed Host Controller drivers
> could use it...are you planning on moving it out into a common area?

The subroutine I have in mind will be pretty generic, but I'm not 
planning on putting it in any sort of common area.  If it turns out to 
be useful for other controller drivers, that can be done later.

> For that matter, maybe it would be worthwhile to move all the
> bandwidth code into hcd.c or some other common area?  Tracking
> bandwidth really isn't (or shouldn't be) HCD-specific, right?

I don't know.  A simple array of time values hardly needs to be made 
common.  Even the routines for checking, adding, and removing values 
are short enough that it won't hurt to rewrite them for each new 
controller.

Besides, most of the controller drivers don't support high speed, so
they wouldn't use the same organization as ehci-hcd (since they don't
need to track microframes).

Alan Stern

--
To unsubscribe from this list: send the line "unsubscribe linux-usb" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
