Re: ehci-sched.c uses wMaxPacketSize but should use actual isoc urb packetsize


 



Hi,

On 11/29/2009 10:16 PM, Alan Stern wrote:
On Sun, 29 Nov 2009, Hans de Goede wrote:

Hi,

On 11/29/2009 06:42 PM, Alan Stern wrote:
On Sun, 29 Nov 2009, Hans de Goede wrote:

Hi,

I have the following problem, which has led me to believe that for
scheduling isoc streams ehci-sched.c should use the actual isoc
URB packet size instead of wMaxPacketSize.

This can't be done.  The URB packet size isn't known at the time the
scheduling is performed.  That is, the packet size for the _current_
URB is known but not the sizes for URBs to be submitted in the future.


That is true, but I would assume that it is normal for all URBs
in an isoc stream to have the same packet size,

Normal perhaps, but it's very common for the packet sizes to vary.
For example, consider an audio stream running at the usual CD data
rate: 44100 samples per second.  That's 44.1 samples per frame, so one
out of every ten packets will have to be larger than the others.
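
As a minimal user-space sketch of that arithmetic (nothing from ehci-sched.c,
just an illustration): spreading 44100 samples over 1000 full-speed frames per
second gives 44 samples per packet plus one extra sample every tenth packet.

#include <stdio.h>

int main(void)
{
	const unsigned int rate = 44100;	  /* samples per second     */
	const unsigned int frames_per_sec = 1000; /* full-speed frames/sec  */
	unsigned int carried = 0, frame;

	for (frame = 0; frame < 10; frame++) {
		unsigned int samples;

		carried += rate % frames_per_sec;  /* accumulate the .1      */
		samples = rate / frames_per_sec;   /* 44 samples             */
		if (carried >= frames_per_sec) {
			samples++;                 /* the larger 45th packet */
			carried -= frames_per_sec;
		}
		printf("frame %u: %u samples\n", frame, samples);
	}
	return 0;
}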


Note I'm not talking about the size of the packets actually sent by
the hardware, which can fluctuate pretty wildly (at least with webcams);
I'm talking about the buffer size allocated inside the URBs as submitted
from the driver to the core. My proposal is to use that buffer size
(at least the one of the initial URB) to calculate the (maximum)
bandwidth usage.
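
To make clear what "the buffer size allocated inside the URBs" refers to, here
is a rough sketch of how a webcam-style driver typically sizes an isoc URB:
every iso_frame_desc[] entry gets the same driver-chosen packet size, which is
the value the proposal would feed into the bandwidth calculation instead of
the endpoint's wMaxPacketSize. "my_dev", "psize" and "NPACKETS" are
illustrative names, not real kernel symbols; the completion handler setup is
omitted for brevity.

#include <linux/usb.h>
#include <linux/slab.h>

#define NPACKETS 32

static struct urb *alloc_isoc_urb(struct usb_device *my_dev,
				  unsigned int ep, unsigned int psize)
{
	struct urb *urb;
	int i;

	urb = usb_alloc_urb(NPACKETS, GFP_KERNEL);
	if (!urb)
		return NULL;

	urb->dev = my_dev;
	urb->pipe = usb_rcvisocpipe(my_dev, ep);
	urb->interval = 1;
	urb->transfer_flags = URB_ISO_ASAP;
	urb->number_of_packets = NPACKETS;
	urb->transfer_buffer_length = NPACKETS * psize;
	urb->transfer_buffer = kmalloc(urb->transfer_buffer_length,
				       GFP_KERNEL);
	if (!urb->transfer_buffer) {
		usb_free_urb(urb);
		return NULL;
	}

	for (i = 0; i < NPACKETS; i++) {
		urb->iso_frame_desc[i].offset = i * psize;
		urb->iso_frame_desc[i].length = psize;	/* driver-chosen size */
	}
	return urb;
}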

For a driver to submit multiple URBs for the same isoc endpoint with different
buffer sizes would be a really strange thing to do IMHO, and, as said,
we could add a check for drivers which do that and simply report an error then
(see the sketch below).
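
Hypothetical sketch of such a check; nothing like this exists in ehci-sched.c
today, and "reserved_maxp" just stands for whatever per-packet size the
bandwidth was originally reserved with.

#include <linux/usb.h>
#include <linux/errno.h>

static int check_iso_packet_sizes(struct urb *urb, unsigned int reserved_maxp)
{
	int i;

	for (i = 0; i < urb->number_of_packets; i++) {
		/* a packet larger than what was reserved would overrun
		 * the schedule, so reject the URB */
		if (urb->iso_frame_desc[i].length > reserved_maxp)
			return -EINVAL;
	}
	return 0;
}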

  After scheduling one could
check that future URBs have the same size and, if not, error out. The
same is already done for the interval.

The usage of the interval field is probably going to change as well --
it will become purely an output parameter.

On the other hand, it has been suggested that new programming
interfaces be added to the core, allowing drivers to change the
interval and maxpacket values.  This would affect both the host's
copies of the descriptors and the bandwidth allocation.


Hmm, I don't really like this, but it could work. That would mean, in the case
of an OHCI host, that the webcam driver would need to do something
to force ed_get() into re-calculating the bandwidth after it has changed
the maxpacketsize.

I could make the driver try to reserve bandwidth first, and then set the
maxpacketsize based on what it managed to get.

Note that what it currently does for webcams which do have alt settings is simply
to try to start the stream at the highest-bandwidth alt setting; if that does
not work (fails with ENOSPC), it tries again at a lower alt setting, rinse and
repeat (see the sketch below).
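
Roughly, that fallback strategy looks like the following sketch.
"start_isoc_stream()" is an invented stand-in for the driver's own code that
allocates and submits the isoc URBs, and I'm assuming higher alt settings have
a larger wMaxPacketSize, as is typical for cameras.

#include <linux/usb.h>
#include <linux/errno.h>

static int start_isoc_stream(struct usb_device *udev, int alt); /* stand-in */

static int start_streaming(struct usb_device *udev, int ifnum, int max_alt)
{
	int alt, ret = -ENOSPC;

	for (alt = max_alt; alt > 0 && ret == -ENOSPC; alt--) {
		ret = usb_set_interface(udev, ifnum, alt);
		if (ret)
			continue;
		/* submitting the URBs is where -ENOSPC shows up today */
		ret = start_isoc_stream(udev, alt);
	}
	return ret;
}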

But I could make it use bandwidth reservation for this special case,
assuming that using bandwidth reservation does not cause a call to ed_get(),
or I could reset the alt setting each time I change the maxpacketsize, so:
calculate new maxpacketsize
set alt 0
set alt 1
override maxpacketsize

This should also cause ed_get() to recalculate the load using the new
maxpacketsize.
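
A rough sketch of that sequence, only to illustrate the idea: "find_isoc_ep()"
is an invented helper, and patching the host-side copy of the endpoint
descriptor like this is a hack, not a proper interface.

#include <linux/usb.h>
#include <linux/errno.h>

static struct usb_host_endpoint *find_isoc_ep(struct usb_device *udev,
					      int ifnum, int alt); /* invented */

static int apply_packet_size(struct usb_device *udev, int ifnum,
			     unsigned int new_maxp)
{
	struct usb_host_endpoint *ep;
	int ret;

	ret = usb_set_interface(udev, ifnum, 0);	/* drop to alt 0   */
	if (ret)
		return ret;
	ret = usb_set_interface(udev, ifnum, 1);	/* re-select alt 1 */
	if (ret)
		return ret;

	ep = find_isoc_ep(udev, ifnum, 1);
	if (!ep)
		return -ENODEV;

	/*
	 * Override the host's copy of the descriptor.  The alt-setting
	 * toggle above should have thrown away the old ED, so the next
	 * URB submission makes ed_get() recompute the load from this
	 * new value.
	 */
	ep->desc.wMaxPacketSize = cpu_to_le16(new_maxp);
	return 0;
}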

But there is a kind of chicken-and-egg problem.  If the values provided
by the device would require too much bandwidth, they won't get
installed and the driver won't get loaded.  Hence it won't have a
chance to change the values.


Hmm, I assume you are talking about the new allocate-upon-setting-the-alt-setting
model, right? This is not yet in place, right? And I wonder whether this will
affect USB 1.x devices at all (assuming they are plugged directly into the root
hub)?

Regards,

Hans

