Re: Handling short transfers

Alan Stern wrote:
> There's also an issue of running out of kernel memory.  I don't know
> how to judge how important that might be.  We didn't face the problem
> before because libusb-0.1 submitted the chunks one-by-one (good for
> memory usage but bad for throughput).
>
> How does libusb-1.0 behave? If it submits the broken-up URBs all at once then we already face the out-of-memory problem. On the other hand, if it submits them one-by-one then it shouldn't have any trouble stopping when a short packet is received.

It submits them all at once, so yes, memory pressure will be high if the user submits a lot of large transfers.
I have not heard of any problems here, at least not yet.
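
For reference, the split-and-submit step looks roughly like this in terms of the raw usbfs interface (a simplified sketch, not the actual libusb-1.0 code; error handling and URB bookkeeping omitted):

/* Simplified sketch: split a large bulk read into 16KB URBs and submit
 * them all to usbfs at once.  Not the exact libusb-1.0 source. */
#include <linux/usbdevice_fs.h>
#include <sys/ioctl.h>
#include <string.h>

#define CHUNK_SIZE 16384   /* per-URB buffer limit enforced by usbfs */

static int submit_bulk_read(int fd, unsigned char ep, void *buf, int len,
                            struct usbdevfs_urb *urbs, int num_urbs)
{
    int i;

    for (i = 0; i < num_urbs; i++) {
        struct usbdevfs_urb *urb = &urbs[i];
        memset(urb, 0, sizeof(*urb));
        urb->type = USBDEVFS_URB_TYPE_BULK;
        urb->endpoint = ep;                    /* IN endpoint, e.g. 0x81 */
        urb->buffer = (char *)buf + i * CHUNK_SIZE;
        urb->buffer_length = (i == num_urbs - 1)
                             ? len - i * CHUNK_SIZE : CHUNK_SIZE;
        /* Every URB goes into the kernel immediately; this is where the
         * memory pressure comes from on very large transfers. */
        if (ioctl(fd, USBDEVFS_SUBMITURB, urb) < 0)
            return -1;
    }
    return 0;
}

So for the 128KB example further down, that is eight 16KB buffers sitting in the kernel at once.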

> Also, in order to solve Daniel's problem I've got another scheme that
> doesn't require the UNBLOCKEP ioctl.  It would yield higher bandwidth
> always.  (The idea is to add another USBDEVFS_URB flag to mark the
> first URB of an async transfer.  When usbfs sees this flag it will stop
> aborting URBs and unblock the endpoint.)

This sounds good.
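
To make the idea concrete (the flag name and value below are invented purely for illustration; nothing like it exists in usbdevice_fs.h today), libusb would only need to tag the first URB of each logical transfer in the submission loop sketched above:

/* Hypothetical illustration only -- the flag name and value are made up. */
#define USBDEVFS_URB_FIRST_OF_TRANSFER  0x80

        if (i == 0)
            urbs[i].flags |= USBDEVFS_URB_FIRST_OF_TRANSFER;
        /* When usbfs sees this flag on a newly submitted URB it would stop
         * discarding the remainder of the previous transfer and unblock
         * the endpoint. */

Everything else in the submission path would stay the same.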


Another problem to keep in mind as we attack this one. Here's an example situation which we've had a report of:

libusb user submits a 128KB transfer to read from an endpoint.
It times out, or the user decides to cancel. So libusb cancels all 8 URBs, one by one.

However, during the cancellation process, data starts arriving. So we get something like the following -
URB 1: cancelled, 0 bytes of data arrived
URB 2: cancelled, 0 bytes of data arrived
URB 3: cancelled, 64 bytes of data arrived
URB 4: cancelled, 64 bytes of data arrived
URB 5: cancelled, 128 bytes of data arrived
URB 6: cancelled, 64 bytes of data arrived
URB 7: cancelled, 64 bytes of data arrived
URB 8: cancelled, 64 bytes of data arrived

libusb currently loses all that data. With my recent patch it will now put it in the buffer as if it had arrived contiguously, but this is still a bit difficult to handle at the application level.

Instead it would be nice if we could cancel them all at once, so that we don't get that trickle of data which probably belongs in the next logical transfer request.

This could either be done with your block/unblock ioctls, or we could add an alternative to the cancellation ioctl with the semantics "atomically cancel all URBs on this endpoint until the next one with Alan's new flag set".
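
For concreteness, the cancellation path today is roughly the following (again a simplified sketch of the usbfs calls, not the exact libusb code):

/* Roughly what happens on timeout/cancel today: discard each URB in turn,
 * then reap them to see how much data had already arrived.  Simplified
 * sketch, error handling omitted. */
#include <linux/usbdevice_fs.h>
#include <sys/ioctl.h>
#include <stdio.h>

static void cancel_transfer(int fd, struct usbdevfs_urb *urbs, int num_urbs)
{
    struct usbdevfs_urb *reaped;
    int i;

    for (i = 0; i < num_urbs; i++)
        ioctl(fd, USBDEVFS_DISCARDURB, &urbs[i]);   /* one-by-one */

    /* Because the endpoint keeps running in between the DISCARDURB calls,
     * some of the cancelled URBs come back with actual_length != 0 -- the
     * trickle of data shown in the list above. */
    for (i = 0; i < num_urbs; i++) {
        if (ioctl(fd, USBDEVFS_REAPURB, &reaped) == 0)
            printf("URB: status %d, %d bytes had already arrived\n",
                   reaped->status, reaped->actual_length);
    }
}

An "atomic cancel" would collapse the first loop into a single operation, so the endpoint never gets a chance to deliver data in between.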



And while I'm writing my shopping list, another problem that would be nice to solve, very similar to the above:

libusb user submits a 16KB single-URB transfer to read from an endpoint, then decides to cancel it because of a timeout or something.

However, a few packets have already started trickling in. So when the cancellation completes, there are (say) 128 bytes of data that have been received.

libusb currently loses that data but, as a result of my recent patch, will now present it in the buffer. But that is quite inconvenient for the application developer, because it's probably the beginning of the next logical transfer. When they come to fire off the next transfer, they'll be missing those first 128 bytes unless they set up some reasonably complex buffering system.
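
Something like the following is what every application would otherwise have to carry around (purely illustrative, names made up):

/* Sketch of the carry-over buffering an application would need: stash the
 * bytes returned by a cancelled transfer and prepend them to the next
 * logical transfer.  Illustrative only. */
#include <string.h>

struct carryover {
    unsigned char buf[512];
    int len;
};

/* Save whatever a cancelled transfer returned. */
static void stash_leftover(struct carryover *c, const void *data, int len)
{
    if (len > (int)sizeof(c->buf))
        len = sizeof(c->buf);
    memcpy(c->buf, data, len);
    c->len = len;
}

/* Before the next logical transfer: copy the stashed bytes to the front of
 * the new buffer; the caller then only reads the remainder from the bus. */
static int take_leftover(struct carryover *c, void *dest)
{
    int n = c->len;
    memcpy(dest, c->buf, n);
    c->len = 0;
    return n;
}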

Solutions for this one... Perhaps some kind of new ioctl with "cancel but only if no data has arrived yet" semantics? Is that possible?


Daniel
