The xilinx_dma is completely broken since the referenced commit, because
if (seg->hw.status & XILINX_DMA_BD_COMP_MASK) is not set for whatever
reason, the current descriptor is never moved to the done list and the
DMA stops moving data.

Isolate the newly added check to DMAs which do implement irq_delay.
That way the new check matches what is likely some new bit in a new
core, without breaking the DMA for older versions of the same core.

Fixes: 7bcdaa658102 ("dmaengine: xilinx_dma: Freeup active list based on descriptor completion bit")
Signed-off-by: Marek Vasut <marex@xxxxxxx>
---
Cc: "Uwe Kleine-König" <u.kleine-koenig@xxxxxxxxxxxx>
Cc: Michal Simek <michal.simek@xxxxxxx>
Cc: Peter Korsgaard <peter@xxxxxxxxxxxxx>
Cc: Radhey Shyam Pandey <radhey.shyam.pandey@xxxxxxx>
Cc: Vinod Koul <vkoul@xxxxxxxxxx>
Cc: dmaengine@xxxxxxxxxxxxxxx
Cc: linux-arm-kernel@xxxxxxxxxxxxxxxxxxx
---
 drivers/dma/xilinx/xilinx_dma.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
index 1bdd57de87a6e..48647c8a64a5b 100644
--- a/drivers/dma/xilinx/xilinx_dma.c
+++ b/drivers/dma/xilinx/xilinx_dma.c
@@ -1718,7 +1718,8 @@ static void xilinx_dma_complete_descriptor(struct xilinx_dma_chan *chan)
 		return;
 
 	list_for_each_entry_safe(desc, next, &chan->active_list, node) {
-		if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
+		if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA &&
+		    chan->irq_delay) {
 			struct xilinx_axidma_tx_segment *seg;
 
 			seg = list_last_entry(&desc->segments,
-- 
2.45.2
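
For additional context, below is a minimal, simplified sketch of how the
completion loop in xilinx_dma_complete_descriptor() behaves once this patch
is applied. Everything outside the hunk above is an approximation of the
surrounding driver code (residue/cookie handling and other details are
elided), not verbatim kernel source:

	list_for_each_entry_safe(desc, next, &chan->active_list, node) {
		if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA &&
		    chan->irq_delay) {
			struct xilinx_axidma_tx_segment *seg;

			/*
			 * Only cores configured with irq_delay are expected
			 * to report a per-BD completion bit, so the check is
			 * now skipped everywhere else.
			 */
			seg = list_last_entry(&desc->segments,
					      struct xilinx_axidma_tx_segment,
					      node);
			if (!(seg->hw.status & XILINX_DMA_BD_COMP_MASK))
				break;
		}

		/*
		 * Cores without irq_delay reach this point unconditionally,
		 * restoring the pre-7bcdaa658102 behaviour: the descriptor
		 * is retired to the done list and data keeps flowing.
		 */
		list_del(&desc->node);
		list_add_tail(&desc->node, &chan->done_list);
	}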