On Mon, Nov 03, 2014 at 09:57:28PM +0530, Vinod Koul wrote:
> On Sat, Nov 01, 2014 at 02:29:42AM +0200, Laurent Pinchart wrote:
> > Many other drivers suffer from the same problem. While I won't reject
> > your proposed fix, I would prefer a more generic approach.
> >
> > One option that has been discussed previously was to use a work queue
> > to delay starting the DMA transfer to an interruptible context where
> > pm_runtime_get_sync() could be called. However, as Russell pointed
> > out [1], even that won't work in all cases, as the DMA slave might
> > need the transfer to be started before enabling part of its hardware
> > (OMAP audio seems to be such a case).
> >
> > I've heard a rumor of a possible DMA engine rework to forbid calling
> > the descriptor preparation API from atomic context. This could be used
> > as a base to implement runtime PM, as DMA slave drivers should not
> > prepare descriptors if they don't need to use them. However, that's a
> > long-term plan, and we need a solution sooner than that.
>
> Well, it is not a rumour :)
>
> I have been contemplating that, now that async_tx will be killed, so we
> don't have to worry about that usage. For the slave DMA usage, we can
> change the prepare API to be non-atomic. I think the users will be okay
> with that approach. This way drivers can use runtime PM calls in prepare.

Except we /do/ have a fair number of places where the prep calls are made
from atomic contexts, particularly in serial drivers. You'd need to
introduce a tasklet into almost every serial driver which doesn't already
have one to restart RX DMA after an error or pause. E.g.:

	drivers/tty/serial/amba-pl011.c
	drivers/tty/serial/pch_uart.c
	drivers/tty/serial/imx.c

Probably also:

	drivers/net/ethernet/micrel/ks8842.c

There could well be other places as well; I've not gone through and
checked exhaustively.
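For concreteness, the sort of restart tasklet meant here might look
roughly like the sketch below. This is a minimal, hypothetical
illustration, not code from any of the drivers listed: foo_uart_port,
foo_uart_rx_restart and foo_dma_rx_error are invented names, and all the
tasklet does is move the prep call out of the hard-IRQ handler.

/*
 * Hypothetical sketch only -- "foo_uart_port" and friends are made up.
 * The IRQ handler no longer calls the prep routine directly; it just
 * schedules a tasklet, and the tasklet re-arms RX DMA.
 */
#include <linux/dmaengine.h>
#include <linux/interrupt.h>

struct foo_uart_port {
	struct dma_chan *rx_chan;
	dma_addr_t rx_dma_buf;			/* mapped RX buffer */
	size_t rx_dma_len;
	struct tasklet_struct rx_restart;	/* tasklet_init() at probe */
};

/* Tasklet body: restart RX DMA outside the hard-IRQ path. */
static void foo_uart_rx_restart(unsigned long data)
{
	struct foo_uart_port *fp = (struct foo_uart_port *)data;
	struct dma_async_tx_descriptor *desc;

	desc = dmaengine_prep_slave_single(fp->rx_chan, fp->rx_dma_buf,
					   fp->rx_dma_len, DMA_DEV_TO_MEM,
					   DMA_PREP_INTERRUPT);
	if (!desc)
		return;		/* a real driver would fall back to PIO */

	dmaengine_submit(desc);
	dma_async_issue_pending(fp->rx_chan);
}

/* Called from the interrupt handler on an RX error or pause. */
static void foo_dma_rx_error(struct foo_uart_port *fp)
{
	dmaengine_terminate_all(fp->rx_chan);
	tasklet_schedule(&fp->rx_restart);
}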