On Fri, 8 Jul 2011, Sarah Sharp wrote:

> > By the way, you went to some effort earlier to make sure that the
> > same bandwidth mutex would be used for both buses on an xHCI
> > controller. Is that still necessary? Since the buses are physically
> > separate (they use different wires in the USB cable), I would expect
> > that the scheduling and bandwidth calculations could be independent
> > as well.
>
> Yes, it's still necessary, because there are global resources shared
> between both buses. In the case of the Intel xHCI host, it has a
> global limit on the number of endpoints, so we have to allow only one
> bandwidth change at a time. Adding or changing endpoints on either bus
> can also impact the direct memory interface bandwidth to the host
> (although I haven't gotten around to adding in bandwidth checking for
> that yet). I would suspect that other host controllers have other
> global resources that require the bandwidth mutex to be shared between
> both xHCI roothubs. So we can't allow bandwidth changes on both buses
> at the same time.

But what if scheduling changes were atomic, as proposed in the email I
sent a few minutes ago? Since xHCI controllers have only one command
ring, you wouldn't need a mutex to prevent bandwidth changes on both
buses at the same time.

> > It's not too early to start thinking about how we can allow drivers
> > to request smaller-than-the-max bandwidth usage. It seems to me that
> > in the usb_host_endpoint structure, we should add a
> > max_bytes_per_period field. Normally this would be the same as the
> > maxpacket value (adjusted for multiplicity and bursting), but
> > drivers could ask to make it smaller.
>
> Hmm, yes, that could work. But I really need to know the max packet
> size and the burst/mult separately for the bandwidth algorithm.

Yes, for calculating packet overheads. Those numbers would still be
available in the descriptors.

> Maybe we also need a field for the polling interval, since some
> devices advertise the wrong value, and some drivers actually do want
> to poll more often?

That could be added too.

> > Actually, we'd need two copies of this field: one for the current
> > value and one for the new value requested by the driver. The current
> > value would be set equal to the new value at the next
> > usb_set_interface call. There wouldn't be any way to adjust the new
> > value prior to a usb_set_configuration call, but since drivers can't
> > make those calls directly, this shouldn't matter too much.
>
> Is there any reason a driver would submit an URB and then request a
> new alt setting?

Yes. Imagine a video device with several different possible
resolutions, implemented using different altsettings.

> If not, why not just have the USB core write over the endpoint
> descriptors when the driver asks for the new value? If we do decide to
> allow drivers to modify a new field in the usb_host_endpoint
> structure, does it matter that there's a disconnect between what we're
> actually using and what a userspace program like lsusb sees?

I guess that could be made to work if we disable the endpoint, make the
change, and then re-enable it. But I wouldn't want to write over the
wBytesPerInterval value in the descriptor; instead we should add a new
field for this.

> > This mechanism can be added later, but I think it would be a good
> > idea to add a field like this now so that we can use it in the
> > upcoming bandwidth allocation and scheduling changes. What do you
> > think?
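To make the proposal concrete, here is roughly the shape I have in
mind. This is only a sketch; every name in it is invented, and none of
it exists in the kernel today:

#include <linux/errno.h>

/* Hypothetical per-endpoint bandwidth request, attached to
 * usb_host_endpoint. "cur" is what the HCD has actually scheduled;
 * "new" is what the driver wants, committed at the next
 * usb_set_interface() call.
 */
struct ep_bw_request {
	unsigned int cur_bytes_per_period;
	unsigned int new_bytes_per_period;
};

/* A driver asks to use less than the descriptor maximum. @max is the
 * wMaxPacketSize value adjusted for burst and mult.
 */
static int usb_request_ep_bandwidth(struct ep_bw_request *bw,
		unsigned int bytes, unsigned int max)
{
	if (bytes == 0 || bytes > max)
		return -EINVAL;
	bw->new_bytes_per_period = bytes;
	return 0;
}

/* The core commits the request when the new altsetting is installed. */
static void usb_commit_ep_bandwidth(struct ep_bw_request *bw)
{
	bw->cur_bytes_per_period = bw->new_bytes_per_period;
}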
> I think this could be done in the call to the xHCI driver to add the
> endpoint. It would be as simple as setting the max packet size in the
> input context from the usb_host_endpoint instead of using the endpoint
> descriptors.

Yes, it should be simple to implement for xHCI.

> But let's get the basic bandwidth support in the xHCI driver before we
> go optimize it for these special cases.

Not a big deal... I just wanted to start the discussion so that we'd
have some idea of what was going to be added.

Alan Stern
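P.S.: To illustrate what setting the value in the input context from
usb_host_endpoint might look like, here is another rough sketch. The
struct and field names below are stand-ins, not the real xhci driver
code:

/* Stand-in for the real endpoint context; the xHCI spec calls the
 * corresponding field Max ESIT Payload (bytes per service interval).
 */
struct fake_ep_ctx {
	unsigned int max_esit_payload;
};

/* Hypothetical: program the driver's requested bytes-per-period value
 * if one is set, otherwise fall back to the descriptor-derived
 * maximum. A request can never exceed that maximum.
 */
static void fill_ep_ctx(struct fake_ep_ctx *ctx,
		unsigned int desc_max, unsigned int requested)
{
	if (requested && requested < desc_max)
		ctx->max_esit_payload = requested;
	else
		ctx->max_esit_payload = desc_max;
}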