> -----Original Message-----
> From: Vinod Koul <vkoul@xxxxxxxxxx>
> Sent: Friday, March 6, 2020 7:04 PM
> To: Sebastian von Ohr <vonohr@xxxxxxxxxxx>; Appana Durga Kedareswara
> Rao <appanad@xxxxxxxxxx>; Radhey Shyam Pandey <radheys@xxxxxxxxxx>;
> Michal Simek <michals@xxxxxxxxxx>
> Cc: dmaengine@xxxxxxxxxxxxxxx
> Subject: Re: [PATCH] dmaengine: xilinx_dma: Add missing check for empty list

Minor nit - better to also add <...> "in device_tx_status callback".

> On 03-03-20, 14:05, Sebastian von Ohr wrote:
> > The DMA transfer might finish just after checking the state with
> > dma_cookie_status, but before the lock is acquired. Not checking for
> > an empty list in xilinx_dma_tx_status may result in reading random
> > data or data corruption when desc is written to. This can be reliably
> > triggered by using dma_sync_wait to wait for DMA completion.
>
> Appana, Radhey can you please test this..?

Sure, we will test it. The changes look fine. One question, though: for a
generic fix to this problem, should we make locking mandatory for all
cookie helper functions, or is there a limitation? The framework
documentation for dma_cookie_status says locking is not required.

This scenario is a race condition: the driver calls dma_cookie_status and
sees that the transfer is not complete, but since there is no locking, the
DMA completion can come in, change the cookie state, and move the
descriptor from the active list to the done list. When the driver then
accesses it in tx_status, the result is data corruption or a crash. (A
standalone sketch of this locking pattern follows after the quoted patch.)

> >
> > Signed-off-by: Sebastian von Ohr <vonohr@xxxxxxxxxxx>
> > ---
> >  drivers/dma/xilinx/xilinx_dma.c | 20 ++++++++++----------
> >  1 file changed, 10 insertions(+), 10 deletions(-)
> >
> > diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
> > index a9c5d5cc9f2b..5d5f1d0ce16c 100644
> > --- a/drivers/dma/xilinx/xilinx_dma.c
> > +++ b/drivers/dma/xilinx/xilinx_dma.c
> > @@ -1229,16 +1229,16 @@ static enum dma_status xilinx_dma_tx_status(struct dma_chan *dchan,
> >  		return ret;
> >
> >  	spin_lock_irqsave(&chan->lock, flags);
> > -
> > -	desc = list_last_entry(&chan->active_list,
> > -			       struct xilinx_dma_tx_descriptor, node);
> > -	/*
> > -	 * VDMA and simple mode do not support residue reporting, so the
> > -	 * residue field will always be 0.
> > -	 */
> > -	if (chan->has_sg && chan->xdev->dma_config->dmatype != XDMA_TYPE_VDMA)
> > -		residue = xilinx_dma_get_residue(chan, desc);
> > -
> > +	if (!list_empty(&chan->active_list)) {
> > +		desc = list_last_entry(&chan->active_list,
> > +				       struct xilinx_dma_tx_descriptor, node);
> > +		/*
> > +		 * VDMA and simple mode do not support residue reporting, so the
> > +		 * residue field will always be 0.
> > +		 */
> > +		if (chan->has_sg && chan->xdev->dma_config->dmatype != XDMA_TYPE_VDMA)
> > +			residue = xilinx_dma_get_residue(chan, desc);
> > +	}
> >  	spin_unlock_irqrestore(&chan->lock, flags);
> >
> >  	dma_set_residue(txstate, residue);
> > --
> > 2.17.1
>
> --
> ~Vinod
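
For illustration only, here is a minimal userspace sketch of the pattern
the patch applies. The names (tx_status_residue(), complete_all(), struct
desc) are hypothetical and this is not the driver code; in the driver the
completion runs from interrupt context and the lock is taken with
spin_lock_irqsave(). The point is that the status path must re-check the
active list after taking the lock, because the unlocked dma_cookie_status
check can race with completion emptying the list:

/* Build with: gcc -pthread -o sketch sketch.c */
#include <pthread.h>
#include <stdio.h>

struct desc {
	struct desc *next;
	int residue;
};

static pthread_mutex_t chan_lock = PTHREAD_MUTEX_INITIALIZER;
static struct desc *active_head;	/* protected by chan_lock */

/* Completion path: may run concurrently and empty the active list. */
static void complete_all(void)
{
	pthread_mutex_lock(&chan_lock);
	active_head = NULL;		/* descriptors moved to a done list */
	pthread_mutex_unlock(&chan_lock);
}

/* Status path: mirrors the re-check added by the patch. */
static int tx_status_residue(void)
{
	int residue = 0;

	pthread_mutex_lock(&chan_lock);
	/*
	 * The list may have been emptied between an earlier unlocked
	 * "is it done?" check and taking the lock, so re-check before
	 * dereferencing the tail entry.
	 */
	if (active_head)
		residue = active_head->residue;
	pthread_mutex_unlock(&chan_lock);

	return residue;
}

int main(void)
{
	struct desc d = { .next = NULL, .residue = 42 };

	active_head = &d;
	printf("residue before completion: %d\n", tx_status_residue());
	complete_all();
	printf("residue after completion:  %d\n", tx_status_residue());
	return 0;
}

In the driver the same role is played by list_empty(&chan->active_list)
under chan->lock, as in the hunk quoted above.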