Re: Handling hardware resource limitations in HCDs

Hi David,
thank you for your very helpful reply.

On Mon, May 4, 2009 at 3:15 AM, David Brownell <david-b@xxxxxxxxxxx> wrote:
>
> A close analogy is the musb_hcd driver, where the silicon
> has a bunch of FIFOs to be shared among all the transfers
> submitted to it.  They come in pairs (RX, TX) except for
> the one used with control transfers ... max 30 FIFOs, plus
> the one for control transfers.  (And it's OTG hardware, so
> this applies in the same way to the peripheral side.)
>
Ah, I hadn't seen that driver (I only looked in drivers/usb/host, not
drivers/usb/musb).
It is indeed a similar scheme, although I can associate any transfer
type with any ETD (no need to reserve a special one for control).

> Host-side allocation of FIFOs there is currently sub-optimal,
> but that mostly affects periodic transfers.  From memory:
>
>  - Control transfers are always queued
>
>  - Bulk too, except (a) TX and RX are usually separate,
>   (b) if there's an available FIFO it may be used,
>   and (c) for RX, a NAK timeout can trigger the sort
>   of software round-robin that EHCI/OHCI/UHCI do in
>   hardware -- to prevent starvation.
>
Yes, looking at the code, it first tries to allocate a private FIFO,
and if none are left it uses a reserved shared FIFO for bulk
transfers, with the special NAK handling you describe.
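Roughly this shape, if I read it right (just a paraphrase, not the
actual musb code; find_free_private_fifo() and the shared_bulk_fifo
field are made-up names):

static int alloc_fifo_for_ep(struct my_hc *hc, struct usb_host_endpoint *ep)
{
	int idx = find_free_private_fifo(hc);	/* hypothetical helper */

	if (idx >= 0)
		return idx;			/* got a dedicated FIFO */

	/* no private FIFO left: bulk can fall back to the reserved
	 * shared FIFO, scheduled with the NAK-timeout round-robin */
	if (usb_endpoint_xfer_bulk(&ep->desc))
		return hc->shared_bulk_fifo;

	return -ENOSPC;				/* periodic: out of luck */
}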

> The bulk RX NAK timeout scheduling was pretty important,
> since many drivers just leave read parked until data
> arrives ... and that previously would starve other
> devices.
>
Interesting, I hadn't thought about that...

>> If no more ETDs are available can the driver return a failure status for
>> urb_enqueue() or must it queue locally?
>> In the former case which status (ENOMEM, ENXIO ??)
>> In the latter should there be a limit on the queue size?
>
> I'd queue locally in at least bulk and control cases,
> since the USB interface drivers generally won't handle
> errors.  For periodic transfers they *should* have at
> least basic fault handling ... it may not be especially
> effective though.
>
I can do that; it's just that it didn't seem necessary to me.
As ETDs (FIFOs) correspond to active endpoints and I have 32, I have
difficulty imagining a use case where they would all be used (some of
the examples you cite later have far fewer FIFOs). Most devices only
seem to have a control ep and a couple of bulk/isoc ones, so it would
take more than 10 devices on the bus and concurrently in use before this
becomes a problem. While of course possible, that seems fairly unlikely
to me (for an embedded device, remember). But maybe there are devices
that use many more eps?

That also brings up the question of how to test ETD / endpoint queuing:
as I understand it, usbtest only submits URBs for one or two
endpoints. I only have one device to use with usbtest - sure, I could
do other USB stuff at the same time (mass storage, webcam, ...) but
that won't reach the 32-endpoint limit. Probably the easiest way is to
artificially reduce the number of available ETDs just for the test.
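(For instance via something like a module parameter - made up, nothing
like this exists in my code yet:

#include <linux/moduleparam.h>

static unsigned int num_etds = 32;	/* hypothetical test knob */
module_param(num_etds, uint, 0444);
MODULE_PARM_DESC(num_etds, "number of ETDs to use (lower it to exercise the queuing path)");

so the queuing / data-memory-exhaustion paths can be hit with a single
device.)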

That being said, I don't think it should be too difficult to implement
the queuing (rough sketch below); I just want to be sure it corresponds
to a realistic usage scenario.
The anti-starvation NAK timeout thing sounds much more complicated, though.
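For the queuing I'm picturing something along these lines (just a
sketch; my_hc, my_td, alloc_etd(), start_transfer() etc. are made-up
names and error handling is abridged):

static int my_hc_urb_enqueue(struct usb_hcd *hcd, struct urb *urb,
			     gfp_t mem_flags)
{
	struct my_hc *hc = hcd_to_my_hc(hcd);	/* hypothetical */
	struct my_td *td;			/* hypothetical per-URB state */
	unsigned long flags;
	int ret;

	td = kzalloc(sizeof(*td), mem_flags);
	if (!td)
		return -ENOMEM;
	td->urb = urb;

	spin_lock_irqsave(&hc->lock, flags);
	ret = usb_hcd_link_urb_to_ep(hcd, urb);
	if (ret) {
		kfree(td);
		goto out;
	}
	urb->hcpriv = td;

	if (alloc_etd(hc, td) == 0)		/* hypothetical */
		start_transfer(hc, td);		/* hypothetical */
	else
		/* no free ETD: park the URB until one is released */
		list_add_tail(&td->list, &hc->pending);
out:
	spin_unlock_irqrestore(&hc->lock, flags);
	return ret;
}

Then when a transfer completes and its ETD is freed, the driver pulls
the next td off hc->pending and starts it; urb_dequeue() would of
course also have to handle URBs that are still parked.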

>
> Again that resembles the musb_hdrc code.  That memory
> is used for the FIFOs, not for the transfers; and it's
> possible to configure endpoints without double buffering.
>
Yes, indeed; I don't need to put the whole transfer in data memory, just
2 * maxpacket (for transfers > maxpacket).
It goes like this:

* Assign a hardware ETD
* Obtain offsets in data memory for 2 * maxpacket
* Write those offsets to fields in the ETD
* Set up DMA
* Start the transfer

So the HC hardware transfers packets from the USB to data memory, and
DMA transfers from data memory to system memory.
The HCD never has to actually write or read data memory (as it's all
done by DMA) but needs to manage the offsets so that concurrent
transfers don't stomp on each other.
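In (made-up) code the setup above ends up looking roughly like this;
alloc_dmem(), etd_write_buf_offsets() and friends don't exist, they're
just placeholders for the steps listed above:

static int setup_transfer(struct my_hc *hc, struct my_td *td)
{
	struct urb *urb = td->urb;
	u16 maxpacket = le16_to_cpu(urb->ep->desc.wMaxPacketSize);
	int etd, buf;

	etd = alloc_etd(hc, td);		/* 1: grab a hardware ETD */
	if (etd < 0)
		return etd;

	buf = alloc_dmem(hc, 2 * maxpacket);	/* 2: double buffer in data memory */
	if (buf < 0) {
		free_etd(hc, etd);
		return buf;
	}

	/* 3: tell the ETD where its two packet buffers live in data memory */
	etd_write_buf_offsets(hc, etd, buf, buf + maxpacket);

	/* 4: DMA moves data between system memory and data memory */
	setup_dma(hc, etd, urb->transfer_dma, urb->transfer_buffer_length);

	etd_start(hc, etd);			/* 5: go */
	return 0;
}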

> That allocation is done statically, at driver setup time.
>
> Silicon like that on DaVinci chips has just 4K SRAM for
> allocation between FIFOs; and just four FIFO pairs (plus
> ep0).  OMAP3 chips, and tusb60x0, have 16K of SRAM and
> fifteen FIFO pairs (plus ep0).
>
Strange, I have twice as many "FIFOs" as the largest one you mention,
but the same SRAM as the smallest.

>
>> With no isoc transfers worst case data memory usage (full speed only) 32
>> * 64 * 2 =  4K => no data memory exhaustion.
>>
> This seems to have gotten rudely line wrapped ...
>
Yes :( - it's not very important though; I just wanted to show in which
cases data memory shortage could occur.

>> In fact it could be worse because the 4K data memory is shared between
>> the USB host and USB function modules although there is no gadget driver
>> code yet.
>
> So, very much like the musb_hdrc silicon ... especially if
> this is OTG hardware (and if it isn't, why bother sharing?)
>
>
Yes, it is OTG hardware, but I only have host-side code for the moment.

>> 2) urb_enqueue() for isoc transfers will attempt to allocate data memory
>> immediately (and fail if there isn't enough).
>> 3) urb_enqueue() for non isoc transfers will not allocate data memory.
>
> Your life might be simpler if you statically allocate
> that memory.  :)
>
I've actually already got the dynamic allocation part working and it
doesn't seem too complicated; I'm just missing the queuing when it
fails.
As I have (compared to the hardware you mention) a largish number of
ETDs (FIFOs) but relatively little data memory, I think dynamic
allocation is better (rough sketch below), since preallocating data
memory to ETDs would waste some. Also, the code I looked at results in
FIFOs of different sizes, meaning you could have the situation where
you need a "large" FIFO but only have several "small" FIFOs available,
which would lead to unnecessary queuing.
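FWIW the dynamic allocator itself is pretty trivial - conceptually
something like this (illustrative only; the block size, the names, and
the u8 dmem_used[] array in my_hc are all made up):

#define DMEM_SIZE	4096
#define DMEM_BLK	64
#define DMEM_NBLKS	(DMEM_SIZE / DMEM_BLK)

/* first-fit over 64-byte blocks; returns a byte offset into data
 * memory, or -ENOSPC so the caller can queue the transfer instead */
static int alloc_dmem(struct my_hc *hc, unsigned int size)
{
	unsigned int nblks = DIV_ROUND_UP(size, DMEM_BLK);
	unsigned int i, run = 0;

	for (i = 0; i < DMEM_NBLKS; i++) {
		run = hc->dmem_used[i] ? 0 : run + 1;
		if (run == nblks) {
			unsigned int start = i + 1 - nblks;

			memset(&hc->dmem_used[start], 1, nblks);
			return start * DMEM_BLK;
		}
	}
	return -ENOSPC;
}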


Regards,
Martin
--
