The interrupt handler puts a half-completed DMA descriptor on the free
list and then schedules a tasklet to process the descriptor's bottom
half, which executes the client's callback. This leaves a window in
which the still-busy descriptor can be picked up from the free list.
Hence, let's disallow re-use of a descriptor until it is fully
processed.

Cc: <stable@xxxxxxxxxxxxxxx>
Acked-by: Jon Hunter <jonathanh@xxxxxxxxxx>
Signed-off-by: Dmitry Osipenko <digetx@xxxxxxxxx>
---
 drivers/dma/tegra20-apb-dma.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
index 319f31d27014..4a750e29bfb5 100644
--- a/drivers/dma/tegra20-apb-dma.c
+++ b/drivers/dma/tegra20-apb-dma.c
@@ -281,7 +281,7 @@ static struct tegra_dma_desc *tegra_dma_desc_get(
 	/* Do not allocate if desc are waiting for ack */
 	list_for_each_entry(dma_desc, &tdc->free_dma_desc, node) {
-		if (async_tx_test_ack(&dma_desc->txd)) {
+		if (async_tx_test_ack(&dma_desc->txd) && !dma_desc->cb_count) {
 			list_del(&dma_desc->node);
 			spin_unlock_irqrestore(&tdc->lock, flags);
 			dma_desc->txd.flags = 0;
-- 
2.24.0
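
For illustration only, below is a minimal user-space sketch of the reuse
guard this patch adds; it is not the driver code, and all names in it
(struct desc, desc_get, cb_count, acked) are hypothetical stand-ins for
the descriptor, its ack state, and the pending-callback counter. The
point it models is that the allocator must skip a free-list entry whose
deferred callback has not run yet, otherwise the bottom half and the new
user would touch the same descriptor.

/*
 * Standalone sketch (assumed names, not drivers/dma/tegra20-apb-dma.c):
 * recycle a descriptor from the free list only when it is both acked by
 * the client and no longer has a callback queued for deferred handling.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct desc {
	struct desc *next;
	bool acked;	/* client acknowledged the completed transfer */
	int cb_count;	/* callbacks still pending in the bottom half */
};

static struct desc *free_list;

/* Rough analogue of the guarded lookup: hand out a descriptor only if it
 * is acked AND has no callback still pending; otherwise leave it alone. */
static struct desc *desc_get(void)
{
	struct desc **pp;

	for (pp = &free_list; *pp; pp = &(*pp)->next) {
		struct desc *d = *pp;

		if (d->acked && !d->cb_count) {
			*pp = d->next;	/* unlink from the free list */
			d->next = NULL;
			return d;
		}
	}
	return NULL;	/* caller would allocate a fresh descriptor */
}

int main(void)
{
	struct desc busy = { .next = NULL, .acked = true, .cb_count = 1 };
	struct desc idle = { .next = &busy, .acked = true, .cb_count = 0 };

	free_list = &idle;

	/* The busy descriptor (callback pending) is skipped; only the fully
	 * processed one is handed out for reuse. */
	printf("got %s\n", desc_get() == &idle ? "idle desc" : "unexpected desc");
	printf("second get: %p (busy desc stays on the list)\n",
	       (void *)desc_get());
	return 0;
}

In the real driver the same idea is expressed by the one-line change
above: tegra_dma_desc_get() additionally checks dma_desc->cb_count under
tdc->lock, so a descriptor whose callback the tasklet has not yet run is
never handed back to a new transfer.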