For years, and across many kernel versions, there have been reports that the RX UART SDMA channel stops working at some point. The workaround was to disable DMA for RX. This commit fixes the problem itself.

Cyclic DMA transfers are used by the UART and other drivers, and they can fail in at least two cases where we run out of descriptors available to the engine:

- Interrupts are disabled for too long and all buffers fill up with data,
  especially in a setup where many small DMA transfers are executed, each
  using only a tiny part of a single buffer.
- DMA errors (such as those generated by a baud rate mismatch with
  imx-uart) use up all descriptors before we can react.

In either case, SDMA stops the channel and no further transfers are done until the respective channel is disabled and re-enabled. We can check whether the channel has been stopped and re-enable it then. To distinguish this from the case where the channel was stopped by the upper-level driver, introduce a new flag, IMX_DMA_ACTIVE.

As sdmac->desc is constant within the loop, desc can be moved out of the loop.
Fixes: 1ec1e82f2510 ("dmaengine: Add Freescale i.MX SDMA support")
Signed-off-by: Philipp Puschmann <philipp.puschmann@xxxxxxxxx>
Signed-off-by: Jan Luebbe <jlu@xxxxxxxxxxxxxx>
Reviewed-by: Lucas Stach <l.stach@xxxxxxxxxxxxxx>
---
Changelog v5:
- join with patch version from Jan Luebbe
- adapt comments and patch descriptions

Changelog v4:
- fixed the fixes tag

Changelog v3:
- use correct dma_wmb() instead of dma_wb()
- add fixes tag

Changelog v2:
- clarify comment and commit description

 drivers/dma/imx-sdma.c | 26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
index b42281604e54..0b1d6a62423d 100644
--- a/drivers/dma/imx-sdma.c
+++ b/drivers/dma/imx-sdma.c
@@ -383,6 +383,7 @@ struct sdma_channel {
 };
 
 #define IMX_DMA_SG_LOOP		BIT(0)
+#define IMX_DMA_ACTIVE		BIT(1)
 
 #define MAX_DMA_CHANNELS 32
 #define MXC_SDMA_DEFAULT_PRIORITY 1
@@ -658,6 +659,9 @@ static int sdma_config_ownership(struct sdma_channel *sdmac,
 
 static void sdma_enable_channel(struct sdma_engine *sdma, int channel)
 {
+	struct sdma_channel *sdmac = &sdma->channel[channel];
+
+	sdmac->flags |= IMX_DMA_ACTIVE;
 	writel(BIT(channel), sdma->regs + SDMA_H_START);
 }
 
@@ -774,16 +778,17 @@ static void sdma_start_desc(struct sdma_channel *sdmac)
 
 static void sdma_update_channel_loop(struct sdma_channel *sdmac)
 {
+	struct sdma_engine *sdma = sdmac->sdma;
 	struct sdma_buffer_descriptor *bd;
+	struct sdma_desc *desc = sdmac->desc;
 	int error = 0;
-	enum dma_status	old_status = sdmac->status;
+	enum dma_status old_status = sdmac->status;
 
 	/*
 	 * loop mode. Iterate over descriptors, re-setup them and
 	 * call callback function.
	 */
-	while (sdmac->desc) {
-		struct sdma_desc *desc = sdmac->desc;
+	while (desc) {
 
 		bd = &desc->bd[desc->buf_tail];
 
@@ -822,6 +827,18 @@ static void sdma_update_channel_loop(struct sdma_channel *sdmac)
 		if (error)
 			sdmac->status = old_status;
 	}
+
+	/* In some situations it may happen that the sdma does not find any
+	 * usable descriptor in the ring to put data into. The channel is
+	 * stopped then and after having freed some buffers we have to restart
+	 * it manually.
+	 */
+	if ((sdmac->flags & IMX_DMA_ACTIVE) &&
+	    !(readl_relaxed(sdma->regs + SDMA_H_STATSTOP) & BIT(sdmac->channel))) {
+		dev_err_ratelimited(sdma->dev, "SDMA channel %d: cyclic transfer disabled by HW, reenabling\n",
+				    sdmac->channel);
+		writel(BIT(sdmac->channel), sdma->regs + SDMA_H_START);
+	}
 }
 
 static void mxc_sdma_handle_channel_normal(struct sdma_channel *data)
@@ -1051,7 +1068,8 @@ static int sdma_disable_channel(struct dma_chan *chan)
 	struct sdma_engine *sdma = sdmac->sdma;
 	int channel = sdmac->channel;
 
-	writel_relaxed(BIT(channel), sdma->regs + SDMA_H_STATSTOP);
+	sdmac->flags &= ~IMX_DMA_ACTIVE;
+	writel(BIT(channel), sdma->regs + SDMA_H_STATSTOP);
 	sdmac->status = DMA_ERROR;
 
 	return 0;
-- 
2.23.0
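[Not part of the patch — an illustrative aside.] The core idea of the fix can be sketched as a small user-space model: one bit per channel in a STATSTOP-like register (set = channel enabled, cleared by "hardware" when the channel stalls), plus a driver-side IMX_DMA_ACTIVE flag that distinguishes a hardware-initiated stop (restart it) from a deliberate stop via sdma_disable_channel() (leave it stopped). The `struct model`, `maybe_restart_channel()`, and `disable_channel()` names below are hypothetical stand-ins for this sketch, not driver APIs.

```c
#include <assert.h>
#include <stdint.h>

#define IMX_DMA_ACTIVE  (1u << 1)  /* set by sdma_enable_channel() in the driver */

/* Hypothetical user-space model of the driver state. */
struct model {
	uint32_t statstop_reg;  /* models readl(SDMA_H_STATSTOP): bit set = enabled */
	uint32_t flags;         /* models sdmac->flags */
	int channel;
};

/* Models the tail of sdma_update_channel_loop(): if the driver still
 * considers the channel active but the hardware has cleared its
 * STATSTOP bit, restart it (models the writel() to SDMA_H_START,
 * which sets the bit again). Returns 1 if a restart happened. */
static int maybe_restart_channel(struct model *m)
{
	if ((m->flags & IMX_DMA_ACTIVE) &&
	    !(m->statstop_reg & (1u << m->channel))) {
		m->statstop_reg |= 1u << m->channel;
		return 1;
	}
	return 0;
}

/* Models sdma_disable_channel(): clearing IMX_DMA_ACTIVE before
 * stopping ensures a deliberate stop is never "repaired" by the
 * restart check above. */
static void disable_channel(struct model *m)
{
	m->flags &= ~IMX_DMA_ACTIVE;
	m->statstop_reg &= ~(1u << m->channel);
}
```

The interesting property is the asymmetry: after a hardware stall the restart check fires exactly once and re-enables the channel, while after disable_channel() it stays inert because the flag was cleared first.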