On Friday 01 May 2009, Martin Fuzzey wrote:
> Hi all,
> While working on the i.MX21 HCD I have a few questions on how hardware
> resource limits should be handled.
>
> The hardware has 32 "ETDs"; my driver allocates an ETD for each active
> endpoint.  That is, the ETD is allocated in the urb_enqueue() function
> IF there is no URB already enqueued for the endpoint, and is freed
> when there are no more URBs for the endpoint.

A close analogy is the musb_hdrc driver, where the silicon has a bunch
of FIFOs to be shared among all the transfers submitted to it.  They
come in pairs (RX, TX) except for the one used with control transfers
... max 30 FIFOs, plus the one for control transfers.  (And it's OTG
hardware, so this applies in the same way to the peripheral side.)

On the host side they need to be associated with a given peripheral
endpoint, and possibly set up to go through a transaction translator
(high speed).  So conceptually they seem pretty similar to your ETDs.

Host-side allocation of FIFOs there is currently sub-optimal, but that
mostly affects periodic transfers.  From memory:

- Control transfers are always queued.

- Bulk too, except (a) TX and RX are usually separate, (b) if there's
  an available FIFO it may be used, and (c) for RX, a NAK timeout can
  trigger the sort of software round-robin that EHCI/OHCI/UHCI do in
  hardware -- to prevent starvation.

- Periodic (interrupt or isochronous) transfers tie up a FIFO until
  their endpoint goes idle ... even when they *could* share one, e.g.
  alternating frames.

The bulk RX NAK timeout scheduling was pretty important, since many
drivers just leave a read parked until data arrives ... and that
previously would starve other devices.

> If no more ETDs are available can the driver return a failure status
> for urb_enqueue() or must it queue locally?
> In the former case which status (ENOMEM, ENXIO ??)
> In the latter should there be a limit on the queue size?

I'd queue locally in at least the bulk and control cases, since the
USB interface drivers generally won't handle errors.  For periodic
transfers they *should* have at least basic fault handling ... it may
not be especially effective though.

There's no natural limit on the queue sizes, so I wouldn't try to
invent one.

> As isoc transfers use 2 ETDs for double buffering this limits the
> number of non-isoc active endpoints to 32 - 2*isoc
>
> Furthermore the hardware has 4K of "data memory" which must be
> allocated to transfers.
> Non isoc transfers each require 2*maxpacket of data memory.

Again that resembles the musb_hdrc code.  That memory is used for the
FIFOs, not for the transfers; and it's possible to configure endpoints
without double buffering.  That allocation is done statically, at
driver setup time.

Silicon like that on DaVinci chips has just 4K of SRAM for allocation
between FIFOs, and just four FIFO pairs (plus ep0).  OMAP3 chips, and
tusb60x0, have 16K of SRAM and fifteen FIFO pairs (plus ep0).
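To make "statically, at setup time" concrete: it boils down to a table
that gets walked once at probe.  Everything in the fragment below is
invented for illustration -- it's not the real musb config tables, and
certainly not your ETD registers -- but the shape is the point:

#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/errno.h>

#define DMEM_SIZE       4096            /* the shared 4K of "data memory" */
#define NUM_ETD         32

/* One statically reserved chunk of data memory per hardware endpoint. */
struct dmem_slot {
        u8      etd;                    /* ETD / hardware endpoint number */
        u16     maxpacket;              /* bytes per buffer */
        bool    double_buffer;          /* reserve 2 * maxpacket? */
};

/* Invented layout: ep0 plus a few full speed bulk/interrupt endpoints. */
static const struct dmem_slot dmem_layout[] = {
        { .etd = 0, .maxpacket = 64, .double_buffer = false },  /* control */
        { .etd = 1, .maxpacket = 64, .double_buffer = true },   /* bulk in */
        { .etd = 2, .maxpacket = 64, .double_buffer = true },   /* bulk out */
        { .etd = 3, .maxpacket = 64, .double_buffer = false },  /* interrupt */
};

/*
 * Walk the table once at setup time and hand out offsets into data
 * memory.  A layout that doesn't fit is a configuration bug caught at
 * probe, not a runtime allocation failure in urb_enqueue().
 */
static int setup_dmem(u16 offsets[NUM_ETD])
{
        unsigned int offset = 0;
        size_t i;

        for (i = 0; i < ARRAY_SIZE(dmem_layout); i++) {
                const struct dmem_slot *s = &dmem_layout[i];
                unsigned int len = s->maxpacket * (s->double_buffer ? 2 : 1);

                if (offset + len > DMEM_SIZE)
                        return -ENOMEM;
                offsets[s->etd] = offset;
                offset += len;
        }
        return 0;
}

Double buffering then becomes a per-endpoint choice made once, instead
of something the enqueue path has to account for on every submission.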
> With no isoc transfers worst case data memory usage (full speed only)
> is 32 * 64 * 2 = 4K => no data memory exhaustion.
>
> When isoc transfers come into play it gets a bit more complicated.
> Taking table 5-4 from the USB spec and eliminating the cases that are
> impossible for this hardware (32 ETD limit, 2 ETDs/isoc => 16 max
> simultaneous), then calculating memory usage:
>
> Payload  #Transfers  DMEM used  ETDs used  DMEM left  ETDs left  DMEM/ETD  maxPacket  max64
>     128          10       2560         20       1536         12       128         64     12
>     256           5       2560         10       1536         22        69         34     12
>     512           2       2048          4       2048         28        73         36     16
>    1023           1       2046          2       2050         30        68         34     16

This seems to have gotten rudely line wrapped ...

> maxPacket = max packet size for non isoc endpoints if all available
> ETDs are used
> max64 = number of 64 byte endpoints that could be supported
> concurrently with the isoc transfers
>
> So this shows the 128 byte payload cannot exhaust memory either, but
> the others could.
>
> In fact it could be worse, because the 4K data memory is shared
> between the USB host and USB function modules, although there is no
> gadget driver code yet.

So, very much like the musb_hdrc silicon ... especially if this is OTG
hardware (and if it isn't, why bother sharing?)

> My current idea is:
> 1) urb_enqueue() will fail for all transfer types if no ETD is
> available.

Suggest you make sure you can queue control and bulk.  Just
pre-allocate the ETDs they'll need, with memory.  Make that work, then
consider getting fancy and using dynamic allocation if the resources
are available.

> 2) urb_enqueue() for isoc transfers will attempt to allocate data
> memory immediately (and fail if there isn't enough).
> 3) urb_enqueue() for non isoc transfers will not allocate data memory.

Your life might be simpler if you statically allocate that memory.  :)

> 4) when data memory for non isoc transfers is required and not
> available the transfer will be put on a "pending" queue and processed
> when memory becomes available.
>
> Is this reasonable?

Maybe; I'm not quite clear on this "data memory" bit.  There's a rough
sketch of how I'd read that pending queue idea at the end of this
message.

- Dave

> Thanks,
>
> Martin
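For (4), here's roughly the flow I'd picture.  Every name and type
below is invented, and a real version would need an actual offset
allocator for the data memory rather than the bare byte counter used
here; the only point is how transfers get parked and unparked:

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/types.h>

#define DMEM_SIZE       4096

/*
 * Hypothetical per-controller state: trivial accounting of data memory
 * plus a FIFO of transfers waiting for it.  The lock and the list are
 * assumed to be set up at probe (spin_lock_init, INIT_LIST_HEAD).
 */
struct dmem_pool {
        spinlock_t              lock;
        unsigned int            used;           /* bytes currently claimed */
        struct list_head        pending;        /* transfers waiting for memory */
};

struct pending_xfer {
        struct list_head        node;
        unsigned int            bytes;          /* 2 * maxpacket for non-isoc */
        void                    (*start)(struct pending_xfer *);
};

/* Try to reserve data memory; if it's not there, park the transfer on
 * the pending list instead of failing the URB submission. */
static bool dmem_claim(struct dmem_pool *pool, struct pending_xfer *xfer)
{
        unsigned long flags;
        bool ok;

        spin_lock_irqsave(&pool->lock, flags);
        ok = pool->used + xfer->bytes <= DMEM_SIZE;
        if (ok)
                pool->used += xfer->bytes;
        else
                list_add_tail(&xfer->node, &pool->pending);
        spin_unlock_irqrestore(&pool->lock, flags);

        return ok;
}

/* When a transfer completes and gives its memory back, see whether any
 * parked transfer now fits, and kick it off. */
static void dmem_release(struct dmem_pool *pool, unsigned int bytes)
{
        struct pending_xfer *xfer, *tmp;
        unsigned long flags;

        spin_lock_irqsave(&pool->lock, flags);
        pool->used -= bytes;
        list_for_each_entry_safe(xfer, tmp, &pool->pending, node) {
                if (pool->used + xfer->bytes > DMEM_SIZE)
                        break;
                pool->used += xfer->bytes;
                list_del(&xfer->node);
                xfer->start(xfer);      /* hand it to the hardware */
        }
        spin_unlock_irqrestore(&pool->lock, flags);
}

Kicking off parked transfers strictly in FIFO order, and stopping at
the first one that doesn't fit, is the same starvation-avoidance
argument as the bulk RX round-robin above: a big transfer shouldn't
wait forever behind a stream of small ones.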