On Sat, 27 Jun 2009, Daniel Drake wrote:

> Alan Stern wrote:
> > There's also an issue of running out of kernel memory.  I don't know
> > how to judge how important that might be.  We didn't face the problem
> > before because libusb-0.1 submitted the chunks one-by-one (good for
> > memory usage but bad for throughput).
> >
> > How does libusb-1.0 behave?  If it submits the broken-up URBs all at
> > once then we already face the out-of-memory problem.  On the other
> > hand, if it submits them one-by-one then it shouldn't have any trouble
> > stopping when a short packet is received.
>
> It submits them all-at-once.  So yes, memory pressure will be high if
> the user submits a lot.

I have not heard of any problems here, at least not yet.  Still, I
prefer not to make the problem worse by enshrining this sort of thing
in a stable API.

> > Also, in order to solve Daniel's problem I've got another scheme that
> > doesn't require the UNBLOCKEP ioctl.  It would yield higher bandwidth
> > always.  (The idea is to add another USBDEVFS_URB flag to mark the
> > first URB of an async transfer.  When usbfs sees this flag it will
> > stop aborting URBs and unblock the endpoint.)
>
> This sounds good.
>
> Another problem to keep in mind as we attack this one...  Example
> situation, which we've had a report of:
>
> libusb user submits a 128kb transfer to read from an endpoint.
> It times out, or the user decides to cancel.  So libusb cancels all 8
> URBs, one by one.
>
> However, during the cancellation process, data starts arriving.  So we
> get something like the following -
> URB 1: cancelled, 0 bytes of data arrived
> URB 2: cancelled, 0 bytes of data arrived
> URB 3: cancelled, 64 bytes of data arrived
> URB 4: cancelled, 64 bytes of data arrived
> URB 5: cancelled, 128 bytes of data arrived
> URB 6: cancelled, 64 bytes of data arrived
> URB 7: cancelled, 64 bytes of data arrived
> URB 8: cancelled, 64 bytes of data arrived
>
> libusb currently loses all that data.  With my recent patch it will now
> put it in the buffer as if it arrived contiguously, but this is still a
> bit difficult to handle at the application level.

An easier approach is for libusb to cancel the URBs in reverse order.
Then any partial data will all be lined up nicely at the start of the
buffer, where it belongs.  Libusb won't have to do any special
rearranging.

> Instead it would be nice if we could cancel them all at once, so that
> we don't get that trickle of data which probably belongs in the next
> logical transfer request.

No, you're wrong about that.  By definition the partial data belongs to
the current transfer.  If the device had meant to send two logical
transfers' worth of data then it would have terminated the first
transfer early by sending a short packet.

> This could either be done with your block/unblock ioctls, or we could
> add an alternative to the cancellation ioctl with semantics of
> "atomically cancel all URBs on this endpoint until the next one with
> Alan's new flag set"

This isn't necessary if libusb cancels its URBs in reverse order.
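To make that concrete, here is a rough, untested sketch of the sort of
thing I mean.  The helper and its surroundings are made up for
illustration -- they are not the actual libusb internals -- but
USBDEVFS_DISCARDURB is the real usbfs ioctl used to cancel a submitted
URB:

    #include <errno.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/usbdevice_fs.h>

    /*
     * Discard the URBs making up one logical transfer, last-submitted
     * first.  fd is the usbfs file descriptor; urbs[] holds the
     * usbdevfs_urb pointers in the order they were submitted with
     * USBDEVFS_SUBMITURB.  (Illustrative helper, not actual libusb
     * code.)
     */
    static void discard_transfer_urbs(int fd, struct usbdevfs_urb **urbs,
                                      int num_urbs)
    {
        int i;

        for (i = num_urbs - 1; i >= 0; i--) {
            /* EINVAL here just means the URB already completed */
            if (ioctl(fd, USBDEVFS_DISCARDURB, urbs[i]) < 0 &&
                errno != EINVAL)
                perror("USBDEVFS_DISCARDURB");
        }
    }

Because the discards are issued tail-first, any data that trickles in
while the cancellation is in progress can only land in the
lower-numbered URBs, so it ends up contiguous at the front of the
transfer buffer and nothing has to be rearranged.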
> And while I'm writing my shopping list...another similar problem that
> would be nice to solve, very similar to the above..:
>
> libusb user submits a 16kb single-URB transfer to read from an
> endpoint, then decides to cancel it because of a timeout or something.
>
> However, a few packets have already started trickling in.  So when the
> cancellation completes, there are (say) 128 bytes of data that have
> been received.
>
> libusb currently loses that data but will now present it in the buffer
> as a result of my recent patch.  But that is quite inconvenient for the
> application developer, because that's probably the beginning of the
> next logical transfer.

What exactly do you mean by this?  _What's_ the beginning of the next
logical transfer?  Those 128 bytes of data?

> When they come to fire off the next transfer, they'll be missing the
> first 64 bytes unless they set up some reasonably complex buffering
> system.

Missing the first 64 bytes?  How does that match up with the 128 bytes
that were received earlier?

> Solutions for this one...  Perhaps some kind of new ioctl with "cancel
> but only if no data has arrived yet" semantics?  Is that possible?

Since I don't fully understand the question, I can't really answer it.

However...  Anybody who cancels a transfer before it has completed is
then obligated to resynchronize with the device.  There are lots of
ways this could be done; which to use will depend on how the device
works.  Regardless, this is the sort of thing which has to be handled
at the application level -- not by libusb or the kernel.

Alan Stern

--
To unsubscribe from this list: send the line "unsubscribe linux-usb" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html