On Tue, 31 May 2011, Sarah Sharp wrote:

> On Tue, May 31, 2011 at 02:52:47PM -0400, Alan Stern wrote:
> > On Tue, 31 May 2011, Sarah Sharp wrote:
> > > So maybe we need to have a different clamping interval based on whether
> > > the host is 0.96 or 1.0 based?  You can check xhci->hci_version, and
> > > clamp it to 10 for hosts with xhci->hci_version < 0x100, and clamp it to
> > > 11 otherwise.
> >
> > Why go to the trouble?  Always clamp it to 10.  The host system
> > software is allowed to poll interrupt endpoints more frequently than
> > the endpoint descriptor calls for.
>
> Sure, but does that have an effect on the driver?  For example, if we're
> sending audio data more frequently than a device wants it, would the
> user be able to tell the difference in the sound?

Audio data is sent by way of isochronous endpoints.  We're talking
about interrupt endpoints; the requirements are not the same.

The fundamental characteristics of interrupt transfers are: guaranteed
bandwidth, bounded latency, and automatic retry on errors.  Reducing
the interval below the requested amount does not violate any of those
characteristics.

> (By the way, several "old timers" in the Intel USB community were highly
> disturbed when they heard that Linux can poll endpoints more frequently.
> They saw the interval advertised in the endpoint as a contract, not a
> suggestion.)

They should read the spec more carefully.  Quoting the USB-2.0 spec
(emphasis is my own):

    Section 4.7.3: Such data may be presented for transfer by a device
    at any time and is delivered by the USB at a rate _no slower_ than
    is specified by the device.

    Section 5.7: Requesting a pipe with an interrupt transfer type
    provides the requester with the following:

        o Guaranteed _maximum_ service period for the pipe

    Section 5.7.4 (following Table 5-8): The period provided by the
    system _may be shorter_ than that desired by the device up to the
    shortest period defined by the USB (125 us microframe or 1 ms
    frame).  The client software and device can depend only on the fact
    that the host will ensure that the time duration between two
    transaction attempts with the endpoint will be _no longer_ than the
    desired period.

(I believe that last sentence is slightly weakened in the USB-3.0 spec.
The duration between two transaction attempts may be slightly longer
than the desired period because the attempts can occur at any point
within the scheduled frames or microframes.  Thus, if the period is
1 frame, it is valid for a transfer at the beginning of frame 25 to be
followed by a transfer near the end of frame 26, resulting in a
duration somewhat longer than 1 ms.)

There probably are more places where this point is made, but these
should be enough to convince anybody.

Alan Stern
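
For illustration, a minimal sketch of the clamping being discussed, assuming
the value "10" refers to an xHCI-style interval exponent (2^n * 125 us).  The
function name, MAX_EXPONENT constant, and standalone structure are made up
for this example; the real xhci-hcd driver handles this in its endpoint
setup code, not as shown here.

    /*
     * Hypothetical sketch, not the actual xhci-hcd code: clamp an
     * endpoint's requested polling exponent so the scheduled period is
     * never longer than the device asked for.
     */
    #include <stdio.h>

    #define MAX_EXPONENT 10	/* assumed meaning of "clamp it to 10": 2^10 * 125 us = 128 ms */

    /* Convert a descriptor bInterval (1..16) into a clamped 0-based exponent. */
    static unsigned int parse_interrupt_interval(unsigned int bInterval)
    {
    	unsigned int exponent;

    	if (bInterval == 0 || bInterval > 16)
    		bInterval = 1;		/* treat invalid values as "every microframe" */

    	exponent = bInterval - 1;	/* period = 2^exponent * 125 us */

    	/*
    	 * Polling more often than requested is allowed (USB-2.0 sections
    	 * 4.7.3, 5.7, 5.7.4), so clamping downward is always safe.
    	 */
    	if (exponent > MAX_EXPONENT)
    		exponent = MAX_EXPONENT;

    	return exponent;
    }

    int main(void)
    {
    	printf("bInterval 16 -> exponent %u\n", parse_interrupt_interval(16));
    	printf("bInterval  4 -> exponent %u\n", parse_interrupt_interval(4));
    	return 0;
    }

The point of the clamp matches the argument above: shortening the service
period only makes the host poll more often, which the spec permits, so a
single clamp value works for both 0.96 and 1.0 hosts.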