On Wednesday 01 October 2008, Gupta, Ajay Kumar wrote:

> Currently all the BULK requests are multiplexed on one hardware endpoint,
> and when a wifi or eth device is in use they never release the BULK
> hardware endpoint so that it can be used by other devices. This causes
> failure of the serial device when either wifi or eth is in use.

Any driver that keeps a bulk request posted at all times, usually an IN
transfer as with most stuff in drivers/net, has this issue.

> I am working on using a different hardware endpoint for each BULK
> device and will submit a patch once it is done.

Be careful with that strategy.  It will die quickly on a number of the
non-OMAP platforms, which don't populate as many endpoints.

The strategy I had thought about was to allow use of more endpoints if
they were available, as a way to improve performance when enough resources
exist ... but primarily, to act more like "normal" hardware and use a
mechanism that's currently disabled.

That mechanism is NAK limits.  See the REVISIT comment in the
musb_urb_enqueue() function, where it sets the interval to zero for bulk
and control transfers.

The way it would work:  if the NAK limit gets hit, the transfer stops
"early".  Finish cleaning it up (DMA might be an issue), rotate that bulk
transfer to the end of the bulk queue, put the next transfer where that
one was, and repeat.

Using that mechanism on a bulk endpoint would mean transfers on it could
no longer starve everything else going in the same direction.  Using it on
a periodic endpoint would mean not tying down one endpoint doing, say, an
every-256-msec hub poll for one hub while there's no endpoint free for an
every-8-msec mouse or keyboard poll ...

In short:  I strongly encourage you to find a way to use the NAK limit
scheme to let incomplete host-side transfers stop themselves and free up
their resources for reuse, without giving up the ability to continue those
transfers later.

- Dave
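
P.S.  To make the rotation idea concrete, here's a toy userspace-only
model of the queue discipline.  None of these names come from
musb_host.c (no musb_qh, no hw_ep structures); it's a sketch of the
scheduling behavior only, and the real thing would also have to handle
the DMA teardown mentioned above.

	/* Toy model: one hardware bulk endpoint shared by several logical
	 * bulk transfers.  The head of the queue owns the endpoint; when
	 * its NAK limit is hit it rotates to the tail and the next entry
	 * takes over.  Illustrative code, not the musb driver.
	 */
	#include <stdio.h>

	struct fake_qh {
		const char *dev_name;	/* e.g. "eth", "wifi", "serial" */
		struct fake_qh *next;
	};

	/* Head of the bulk queue; the first entry owns the endpoint. */
	static struct fake_qh *bulk_queue;

	/* Append a qh to the tail of the queue. */
	static void qh_enqueue(struct fake_qh *qh)
	{
		struct fake_qh **pp = &bulk_queue;

		while (*pp)
			pp = &(*pp)->next;
		qh->next = NULL;
		*pp = qh;
	}

	/* NAK limit hit: stop the current transfer "early", move it to
	 * the tail, and hand the endpoint to the new head of the queue.
	 */
	static void bulk_nak_timeout(void)
	{
		struct fake_qh *cur = bulk_queue;

		if (!cur || !cur->next)
			return;		/* nothing else is waiting */

		bulk_queue = cur->next;
		qh_enqueue(cur);
		printf("rotated %s to tail, %s now owns the bulk endpoint\n",
		       cur->dev_name, bulk_queue->dev_name);
	}

	int main(void)
	{
		struct fake_qh eth = { "eth" };
		struct fake_qh wifi = { "wifi" };
		struct fake_qh serial = { "serial" };

		qh_enqueue(&eth);
		qh_enqueue(&wifi);
		qh_enqueue(&serial);

		/* eth and then wifi hit their NAK limits, so serial
		 * eventually gets its turn instead of being starved.
		 */
		bulk_nak_timeout();
		bulk_nak_timeout();
		return 0;
	}

Run it and the serial entry ends up owning the endpoint after the
network-style entries time out on NAKs, which is exactly the starvation
fix being argued for here.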