On Tue, Jun 07, 2011 at 10:53:25AM -0400, Alan Stern wrote:
> On Mon, 6 Jun 2011, Sarah Sharp wrote:
>
> > > > > succeeds.  The bandwidth checking needs to be done before the USB core
> > > > issues that control transfer.
> > >
> > > Is this requirement in the xHCI spec?  I don't recall seeing it (but
> > > then I haven't read the whole thing).
> >
> > Yes, it is a requirement of the xHCI spec.  It's really that they want
> > to allow virtualization hosts the opportunity to deny guests the chance
> > to switch to certain configurations or alternate interface settings.  I
> > suppose if two guests could share different interfaces of the same
> > device, then you wouldn't want one guest to suddenly switch
> > configurations.
>
> I don't see the connection.  Preventing a guest from switching configs
> or altsettings doesn't have anything to do with whether the bandwidth
> checking is done before or after the control transfer.

The command to change the bandwidth (Configure Endpoint) could be used
by the host OS to snoop what config or alt setting the guest is trying
to use.  So if the host notices that guest 1 is trying to remove an
endpoint that guest 2 is trying to use, the host can reject that
command.  However, if the guest can issue the control transfer to
change the config before it issues the Configure Endpoint command, then
the physical USB device will change configurations before the host has
a chance to prevent it.

The host could snoop the control transfers for config/alt setting
changes, but it's probably easier/more efficient to snoop the xHCI
commands.  But this is all just a theory, since I don't know how the
virtualization folks are going to implement xHCI virtualization.

> > > Regardless, this is how we should do it.  At the same time, we could
> > > include a feature allowing drivers to decrease the de facto maxpacket
> > > values below the values given in the endpoint descriptors, for improved
> > > scheduling performance.
> >
> > Well, ok, but the xHCI host controller will only accept powers of 2 max
> > packet sizes,
>
> Sez who?  In my copy of the xHCI spec, 6.2.3.5 states that the Max
> Packet Size field shall be set to the value defined in bits 10:0 of the
> endpoint descriptor.  Those values don't have to be powers of two.

Oh, sorry, you're right.  I was thinking of the interval value, which
is bound to powers of two.  Disregard that paragraph. :)

> > so will there be any drivers that will benefit from that?
>
> Didn't we discuss this some time ago?  Video drivers that won't be
> using the full resolution available in a particular altsetting are an
> example.  The packets they will transfer will be smaller than the
> maxpacket values listed in the descriptors.

Ok, sure.  If the drivers are always transferring one frame per packet,
and the resolution is smaller, the packets will be consistently
smaller, correct?  We'll never have the case where the driver will need
to transfer some larger packet for an endpoint that has a custom Max
Packet size?

> > Also, what about USB 3.0 devices, where multiple max packets can be
> > burst at one time?  The devices that set bMaxBurst are required to set
> > the max packet size to a fixed value.  I think USB 3.0 isochronous
> > devices can do bursts, but I would have to check.
>
> Yes, bursting might complicate the issue.  Still, drivers should be
> able to tell the scheduling code how much of the potential bandwidth
> they actually intend to use.

Yes, and the endpoints that benefit most from bursting (bulk endpoints)
won't count against the bus bandwidth.  The transfers will just be fit
in as necessary.
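Just to make the burst accounting concrete, here's a toy sketch of the
worst-case per-service-interval payload a SuperSpeed periodic endpoint
could end up reserving under the USB 3.0 burst rules.  This isn't code
from the xHCI driver; the struct below is a hand-rolled stand-in for
the SuperSpeed endpoint companion descriptor, and the numbers in main()
are made up:

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the SuperSpeed endpoint companion descriptor fields. */
struct ss_ep_companion {
	uint8_t  bMaxBurst;		/* 0 means 1 packet per burst, 15 means 16 */
	uint8_t  bmAttributes;		/* bits 1:0 = Mult for isochronous endpoints */
	uint16_t wBytesPerInterval;	/* periodic payload the device actually asks for */
};

/* Worst case the schedule would have to assume if the driver can't
 * "under-promise": wMaxPacketSize * (bMaxBurst + 1) * (Mult + 1). */
static unsigned int max_bytes_per_interval(uint16_t wMaxPacketSize,
					   const struct ss_ep_companion *comp,
					   int is_isoc)
{
	unsigned int burst = comp->bMaxBurst + 1;
	unsigned int mult  = is_isoc ? (comp->bmAttributes & 0x3) + 1 : 1;

	return wMaxPacketSize * burst * mult;
}

int main(void)
{
	/* Made-up isochronous endpoint: 1024-byte packets, 16-packet
	 * bursts, Mult = 2, i.e. three bursts per service interval. */
	struct ss_ep_companion comp = { 15, 0x2, 0 };

	printf("%u bytes per service interval\n",
	       max_bytes_per_interval(1024, &comp, 1));	/* 49152 */
	return 0;
}

A bulk endpoint has a companion descriptor too (bmAttributes carries
MaxStreams there instead of Mult), but as I said above, nothing gets
reserved for it in the periodic schedule.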
> > I'm more interested in making sure that the drivers that use a different
> > polling interval than advertised in the endpoint descriptors are able to
> > change the xHCI interval.
>
> Why would a driver want to use a different interval?  Generally
> speaking, with isochronous transfers you _can't_ reasonably change the
> interval.  With interrupt transfers you can, but the only effect is to
> decrease latency -- why wouldn't the latency in the descriptor be
> adequate?

Aren't there some devices that advertise the interval in the wrong
format?  I.e. using frames for high speed device intervals instead of
microframes?  I thought I saw some quirks in the webcam drivers for
fixing those intervals up.

I originally thought that one of my HS webcams had this issue, but I've
double checked, and the issue is with the interrupt endpoint, not the
isochronous endpoint like I thought.

[ 2310.414885] usb 3-2: Driver uses different interval (8 microframes) than xHCI (128 microframes)
[ 2310.430819] xhci_hcd 0000:06:00.0: ep 0x87 - asked for 16 bytes, 9 bytes untransferred

lsusb output:

Bus 003 Device 004: ID 046d:09a5 Logitech, Inc. Quickcam 3000 For Business
Device Descriptor:
  bLength                18
  bDescriptorType         1
  bcdUSB               2.00
...
      Endpoint Descriptor:
        bLength                 7
        bDescriptorType         5
        bEndpointAddress     0x87  EP 7 IN
        bmAttributes            3
          Transfer Type            Interrupt
          Synch Type               None
          Usage Type               Data
        wMaxPacketSize     0x0010  1x 16 bytes
        bInterval               8

Now, the kernel doc around urb->interval is not very clear.  It says:

 * @interval: Specifies the polling interval for interrupt or isochronous
 *      transfers.  The units are frames (milliseconds) for full and low
 *      speed devices, and microframes (1/8 millisecond) for highspeed
 *      and SuperSpeed devices.

So is interval supposed to be decoded as a straight value, i.e. for a
HS device, would an urb->interval of 8 mean 8 microframes?  Or are the
USB core and host controller drivers supposed to decode urb->interval
using the same encoding as the USB bus specifications?

I assumed that the kernel doc meant the xHCI driver was supposed to
interpret urb->interval as a straight value of either frames or
microframes.  So the converted interval from the endpoint descriptor is
2^(8-1) = 128 microframes, and that's what the xHCI driver has told the
host to use as a polling interval.  uvcvideo sets urb->interval to 8,
which, if it's a straight value, is 8 microframes, which is != 128.  Is
this a bug in the uvcvideo driver, or a misinterpretation on my part?
(There's a small worked example of the two readings below.)

There are several drivers that set urb->interval to the interval from
the descriptor without checking what speed the device is operating at.
There are other drivers that set it to some constant, and I can't tell
if that matches the device descriptors at all.  So it's difficult to
tell whether drivers are using different polling intervals than what
the device specifies.
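To spell out the two readings, here's a toy illustration of the ch9
encoding versus a straight microframe count.  It's just arithmetic, not
a quote of the uvcvideo or xHCI code:

#include <stdio.h>

int main(void)
{
	unsigned int bInterval = 8;	/* from the Quickcam's endpoint descriptor */

	/* USB 2.0 ch9 encoding for HS periodic endpoints: the period is
	 * 2^(bInterval - 1) microframes.  This is what ends up in the
	 * xHCI endpoint context. */
	unsigned int decoded_uframes = 1 << (bInterval - 1);

	/* urb->interval taken as a "straight" microframe count, which is
	 * how I read the kerneldoc and how uvcvideo appears to fill it in. */
	unsigned int raw_uframes = bInterval;

	printf("decoded: %u microframes, raw: %u microframes\n",
	       decoded_uframes, raw_uframes);	/* decoded: 128, raw: 8 */
	return 0;
}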
> > If the host rejects a new configuration or alt setting, we should stay
> > with the current setup, without attempting to change the device.  If the
> > device rejects the change by stalling the control transfer to set the
> > new configuration or alt setting, we need to revert the host state back
> > to the old setup.
>
> In the case of config changes, we _can't_.  It's already too late --
> by the time the bandwidth change is tested or the control transfer is
> sent, we have already unbound the existing drivers and called
> device_del() for the old interfaces.  Even trying to reinstall the old
> config from scratch would be very difficult; it would be much simpler
> to send a Set-Config(0) request.

I see.  And we can't test whether the configuration change will be
accepted by the host controller without first unbinding the drivers and
making sure all urbs are canceled...  yarg!

We could attempt to grab a new slot context and set up the different
configuration that we want.  But I don't think that would work (even if
we tell the host controller to skip the Address Device control
transfer) because the slot context has info about the route string of
the device (even for USB 2.0 devices).  Any paranoid hardware
implementation would throw a context error if two slots with the same
route string were added.  Blech!

> In the case of altsetting changes, we can.  In fact, that's what we do
> now.  (Although not entirely correctly...)

Are there bugs in the alt setting fallback mechanism?  Are they caused
by my bandwidth code or something else?

Sarah Sharp
--
To unsubscribe from this list: send the line "unsubscribe linux-usb" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html