On Thu, 15 Oct 2015, John Tapsell wrote:

> I did have one wacky idea.  I'm sure it's stupid, but here it is: Is
> it at all possible that there's a bug in the Linux USB code where a
> bInterval value of 1 ms is being converted into microframes (8
> microframes) but then, because it's a full-speed device, incorrectly
> read as an 8 ms delay?  I did have a look into the code, but got
> thoroughly lost.  Any pointers on how I could check my wacky theory?

There is no such bug.  Such a thing would have been spotted long, long
ago.

> I'm just wondering where this 8 ms delay comes from.

From multiple places: the time to submit the request, the time to
reserve bandwidth for the previously unused interrupt endpoint, and
the time to complete the transfer, all multiplied by 2.  You can get
more information from usbmon (see Documentation/usb/usbmon.txt in the
kernel source).

But Greg is right; the protocol you described is terrible.  There's no
need for a multiple ping-pong interchange like that; all you should
need to do is wait for the device to send the next bit (or whatever)
of data as soon as it becomes available.

Alan Stern
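
For what it's worth, the spec-defined interpretation is easy to check
from userspace.  Below is a minimal sketch (illustrative C, not actual
kernel code; the function name poll_period_us is made up) of how the
USB 2.0 spec defines the interrupt polling period: at full/low speed,
bInterval counts 1-ms frames directly, and only at high speed does it
become an exponent over 125-us microframes.

    /* Illustrative only: polling period per the USB 2.0 spec. */
    #include <stdio.h>

    static unsigned int poll_period_us(int high_speed,
                                       unsigned char bInterval)
    {
        if (high_speed)
            /* High speed: 2^(bInterval-1) microframes of 125 us. */
            return 125u << (bInterval - 1);
        /* Full/low speed: bInterval is a count of 1-ms frames. */
        return bInterval * 1000u;
    }

    int main(void)
    {
        printf("FS bInterval=1: %u us\n", poll_period_us(0, 1));
        printf("HS bInterval=4: %u us\n", poll_period_us(1, 4));
        return 0;
    }

Both lines print 1000 us: a full-speed bInterval of 1 stays 1 ms, and
there is no code path that reinterprets it in microframe units.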
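
As a concrete way to start with usbmon: once debugfs is mounted and
the usbmon module is loaded, its text interface is an ordinary file
you can read.  The sketch below assumes the usual path
/sys/kernel/debug/usb/usbmon/0u (the "0u" file covers all buses, and
reading it normally requires root); see Documentation/usb/usbmon.txt
for the exact line format.  The microsecond timestamps on the S
(submission) and C (callback) lines are what will show you where the
8 ms actually goes.

    /* Dump usbmon text-interface events as they arrive.  Each line
     * is one URB event: tag, timestamp in us, S/C/E event type, ... */
    #include <stdio.h>

    int main(void)
    {
        char line[512];
        FILE *f = fopen("/sys/kernel/debug/usb/usbmon/0u", "r");

        if (!f) {
            perror("usbmon");
            return 1;
        }
        while (fgets(line, sizeof(line), f))
            fputs(line, stdout);
        fclose(f);
        return 0;
    }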
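
And to make the protocol suggestion concrete, here is a hedged sketch
using libusb-1.0 that simply blocks on the interrupt IN endpoint so
the device can push data the moment it has any, with no ping-pong.
The VID/PID (0x1234/0x5678), the endpoint address 0x81, and interface
0 are placeholders for whatever the real device uses.

    #include <stdio.h>
    #include <libusb-1.0/libusb.h>

    int main(void)
    {
        libusb_context *ctx;
        libusb_device_handle *devh;
        unsigned char buf[64];
        int got;

        if (libusb_init(&ctx) != 0)
            return 1;
        devh = libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678);
        if (!devh || libusb_claim_interface(devh, 0) != 0)
            return 1;

        for (;;) {
            /* Blocks until the device has data; timeout 0 means
             * wait forever.  Each pass is one interrupt IN transfer. */
            int rc = libusb_interrupt_transfer(devh, 0x81, buf,
                                               (int)sizeof(buf),
                                               &got, 0);
            if (rc != 0)
                break;
            printf("got %d bytes\n", got);
        }

        libusb_release_interface(devh, 0);
        libusb_close(devh);
        libusb_exit(ctx);
        return 0;
    }

With this arrangement the host keeps an interrupt URB pending, so the
latency you see should be bounded by the endpoint's polling interval
rather than by round trips in your own protocol.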