Re: [RFC 0/9] xhci: Intel SW bandwidth checking.

On Fri, Jul 08, 2011 at 06:01:27PM -0400, Alan Stern wrote:
> On Fri, 8 Jul 2011, Sarah Sharp wrote:
> 
> > > By the way, you went to some effort earlier to make sure that the same 
> > > bandwidth mutex would be used for both buses on an xHCI controller.  Is 
> > > that still necessary?  Since the buses are physically separate (they 
> > > use different wires in the USB cable), I would expect that the 
> > > scheduling and bandwidth calculations could be independent as well.
> > 
> > Yes, it's still necessary, because there are global resources shared
> > between both buses.  In the case of the Intel xHCI host, it has a global
> > limit on the number of endpoints, so we can only allow one bandwidth
> > change at a time.  Adding or changing endpoints on either bus can also
> > impact the direct memory interface bandwidth to the host (although I
> > haven't gotten around to adding in bandwidth checking for that yet).  I
> > would suspect that other host controllers have other global resources
> > that require the bandwidth mutex to be shared between both xHCI
> > roothubs.  So we can't allow bandwidth changes on both buses at the same
> > time.
> 
> But what if scheduling changes were atomic, as proposed in the email I 
> sent a few minutes ago?

I don't really understand how your proposal makes bandwidth changes
atomic.  Let me paraphrase your proposal, to make sure I'm not
confused.

You want to store the arrays of endpoints to be added and removed in the
USB core somewhere, so that you can recover when a device rejects an
alternate interface setting (or the xHCI host doesn't have enough
bandwidth for the new alt setting).  Aside from wanting to pass the
endpoints as arrays to the xHCI driver instead of individually, it seems
like you want to use the same basic mechanism for making the xHCI driver
change the bandwidth allocation.
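
If so, the interface would presumably look something like this (the
function and parameter names here are my invention, just to check that
I'm reading you right):

	/*
	 * Hypothetical sketch of the array-based interface -- not real
	 * code.  The USB core would hang on to both arrays so it can
	 * back the change out if the device stalls the Set Interface
	 * request or the HCD rejects the new setting.
	 */
	int usb_hcd_change_bandwidth(struct usb_hcd *hcd,
			struct usb_device *udev,
			struct usb_host_endpoint **dropped_eps,
			unsigned int num_dropped,
			struct usb_host_endpoint **added_eps,
			unsigned int num_added);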

> Since xHCI controllers have only one command 
> ring, you wouldn't need a mutex to prevent bandwidth changes on both 
> buses at the same time.

But multiple commands can be pending on the command ring at once.  The
xHCI driver has to drop xhci->lock to wait for the command to finish,
so there's nothing to stop a second bandwidth change from being queued
on the command ring before the first one completes.
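
Roughly, the driver does this today (heavily simplified from
xhci_configure_endpoint(), with the error handling dropped):

	spin_lock_irqsave(&xhci->lock, flags);
	/* Queue a Configure Endpoint TRB on the command ring. */
	xhci_queue_configure_endpoint(xhci, in_ctx->dma,
			udev->slot_id, false);
	xhci_ring_cmd_db(xhci);
	spin_unlock_irqrestore(&xhci->lock, flags);

	/*
	 * The lock is dropped here, so nothing stops a second caller
	 * from queueing another Configure Endpoint command before this
	 * one completes.
	 */
	wait_for_completion(cmd_completion);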

Even with the Intel bandwidth checking patches, I still have to deal
with other host controllers that do proper hardware bandwidth checking.
They may have resource constraints that cause them to reject bandwidth
changes, even when the endpoints would technically "fit" in the
schedule.

So unless the USB core guarantees that the bandwidth mutex is held
across the entire bandwidth change, I don't think we can avoid this
type of race condition:

 1. Driver A tries to switch device A to a less-bandwidth intensive
    alternate interface setting, and the xHCI driver issues a Configure
    Endpoint command.  The xHCI driver drops the xhci->lock to wait on a
    completion for the command to be processed.

 2. Driver B tries to switch device B to a more-bandwidth intensive
    alt setting.  The xhci->lock is acquired and a second Configure
    Endpoint command is queued.

 3. The first Configure Endpoint command succeeds, but device A
    stalls the Set Interface request (or perhaps the xHCI host
    controller rejects the alt setting because of some other internal
    resource constraint, like the number of endpoints the host can
    handle).  In either case, we need to reinstall the old alt setting.

 4. The second Configure Endpoint command succeeds, and now device B
    is taking all the bus bandwidth.  We can't install device A's old
    alt setting, and can't install the new alt setting, so the device is
    basically useless.

That's why we need the USB core to hold the bandwidth mutex across both
the call to usb_hcd_alloc_bandwidth() and the Set Interface request to
the device.  We need to prevent any changes to the host
controller bandwidth and internal resources until we have fully
installed or finished cleaning up a failed bandwidth change.
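
Concretely, I'd expect usb_set_interface() to end up looking something
like this (just a sketch; most of the error handling is trimmed):

	mutex_lock(hcd->bandwidth_mutex);

	/* Reserve host resources for the new alt setting first. */
	ret = usb_hcd_alloc_bandwidth(dev, NULL,
			iface->cur_altsetting, alt);
	if (ret < 0) {
		mutex_unlock(hcd->bandwidth_mutex);
		return ret;
	}

	ret = usb_control_msg(dev, usb_sndctrlpipe(dev, 0),
			USB_REQ_SET_INTERFACE, USB_RECIP_INTERFACE,
			alternate, interface, NULL, 0,
			USB_CTRL_SET_TIMEOUT);
	if (ret < 0)
		/* The device stalled; give it back the old setting. */
		usb_hcd_alloc_bandwidth(dev, NULL,
				alt, iface->cur_altsetting);

	mutex_unlock(hcd->bandwidth_mutex);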

> > > It's not too early to start thinking about how we can allow drivers to
> > > request smaller-than-the-max bandwidth usage.  It seems to me that in
> > > the usb_host_endpoint structure, we should add a max_bytes_per_period
> > > field.  Normally this would be the same as the maxpacket value
> > > (adjusted for multiplicity and bursting), but drivers could ask to make
> > > it smaller.
> > 
> > Hmm, yes, that could work.  But I really need to know the max packet
> > size and the burst/mult separately for the bandwidth algorithm.
> 
> Yes, for calculating packet overheads.  Those numbers would still be 
> available in the descriptors.
> 
> >  Maybe
> > we also need a field for polling interval, since some devices advertise
> > the wrong value, and some drivers actually do want to poll more often?
> 
> That could be added too.
> 
> > > Actually, we'd need two copies of this field: One for the current value 
> > > and one for the new value requested by the driver.  The current value 
> > > would be set equal to the new value at the next usb_set_interface call.
> > > There wouldn't be any way to adjust the new value prior to a 
> > > usb_set_configuration call, but since drivers can't make those calls 
> > > directly, this shouldn't matter too much.
> > 
> > Is there any reason a driver would submit an URB and then request a new
> > alt setting?
> 
> Yes.  Imagine a video device with several different possible 
> resolutions, implemented using different altsettings.

Yes, but why wouldn't the driver cancel the URB submitted to the old
endpoint before changing the alt setting and possibly altering the
endpoint's characteristics?  The xHCI driver installs a new endpoint
ring whenever an endpoint is changed, so the driver really *needs* to
cancel that URB.  It's the same sort of issue we had with drivers not
canceling URBs before issuing a device reset.  I don't really care which
way you design the new interface, but I was concerned about this issue
in particular.
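
In other words, when the video driver wants a new resolution, I'd
expect it to do something like this (hypothetical driver-private
names):

	/* Stop I/O on the old endpoint before its ring goes away. */
	usb_kill_urb(cam->iso_urb);

	retval = usb_set_interface(cam->udev, cam->ifnum, new_alt);
	if (retval == 0)
		/* Now it's safe to stream on the new endpoint. */
		retval = usb_submit_urb(cam->iso_urb, GFP_KERNEL);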

> >  If not, why not just have the USB core write over the
> > endpoint descriptors when the driver asks for the new value?  If we do
> > decide to allow drivers to modify a new field in the usb_host_endpoint
> > structure, does it matter that there's a disconnect between what we're
> > actually using and what a userspace program like lsusb sees?
> 
> I guess that could be made to work, if we disable the endpoint, then 
> make the change, and then enable it.  Although I wouldn't want to write 
> over the wBytesPerInterval value in the descriptor; instead we should 
> add a new field for this.
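
So struct usb_host_endpoint would grow something like this?  (Field
names made up; I'm just sketching your proposal to make sure I follow.)

	struct usb_host_endpoint {
		struct usb_endpoint_descriptor	desc;
		/* ... existing fields ... */

		/*
		 * Driver-requested periodic bandwidth, in bytes per
		 * service interval.  Defaults to the maximum implied
		 * by wMaxPacketSize (adjusted for burst and mult).
		 * new_bytes_per_interval takes effect at the next
		 * usb_set_interface() call.
		 */
		unsigned int	cur_bytes_per_interval;
		unsigned int	new_bytes_per_interval;
	};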

Sarah Sharp