Hi Jeremy Trimble,

> -----Original Message-----
> From: Jeremy Trimble [mailto:jeremy.trimble@xxxxxxxxx]
> Sent: Friday, June 19, 2015 10:19 PM
> To: Appana Durga Kedareswara Rao
> Cc: Vinod Koul; dan.j.williams@xxxxxxxxx; Michal Simek; Soren Brinkmann;
> Appana Durga Kedareswara Rao; Anirudha Sarangi; Punnaiah Choudary
> Kalluri; dmaengine@xxxxxxxxxxxxxxx; linux-arm-kernel@xxxxxxxxxxxxxxxxxxx;
> linux-kernel@xxxxxxxxxxxxxxx; Srikanth Thokala
> Subject: Re: [PATCH v7] dma: Add Xilinx AXI Direct Memory Access Engine
> driver support
>
> > +/**
> > + * xilinx_dma_start_transfer - Starts DMA transfer
> > + * @chan: Driver specific channel struct pointer
> > + */
> > +static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
> > +{
> > +	struct xilinx_dma_tx_descriptor *desc;
> > +	struct xilinx_dma_tx_segment *head, *tail = NULL;
> > +
> > +	if (chan->err)
> > +		return;
> > +
> > +	if (list_empty(&chan->pending_list))
> > +		return;
> > +
> > +	if (!chan->idle)
> > +		return;
> > +
> > +	desc = list_first_entry(&chan->pending_list,
> > +				struct xilinx_dma_tx_descriptor, node);
> > +
> > +	if (chan->has_sg && xilinx_dma_is_running(chan) &&
> > +	    !xilinx_dma_is_idle(chan)) {
> > +		tail = list_entry(desc->segments.prev,
> > +				  struct xilinx_dma_tx_segment, node);
> > +		dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
> > +		goto out_free_desc;
> > +	}
> > +
> > +	if (chan->has_sg) {
> > +		head = list_first_entry(&desc->segments,
> > +					struct xilinx_dma_tx_segment, node);
> > +		tail = list_entry(desc->segments.prev,
> > +				  struct xilinx_dma_tx_segment, node);
> > +		dma_ctrl_write(chan, XILINX_DMA_REG_CURDESC, head->phys);
> > +	}
> > +
> > +	/* Enable interrupts */
> > +	dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
> > +		     XILINX_DMA_XR_IRQ_ALL_MASK);
> > +
> > +	xilinx_dma_start(chan);
> > +	if (chan->err)
> > +		return;
> > +
> > +	/* Start the transfer */
> > +	if (chan->has_sg) {
> > +		dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
> > +	} else {
> > +		struct xilinx_dma_tx_segment *segment;
> > +		struct xilinx_dma_desc_hw *hw;
> > +
> > +		segment = list_first_entry(&desc->segments,
> > +					   struct xilinx_dma_tx_segment, node);
> > +		hw = &segment->hw;
> > +
> > +		if (desc->direction == DMA_MEM_TO_DEV)
> > +			dma_ctrl_write(chan, XILINX_DMA_REG_SRCADDR,
> > +				       hw->buf_addr);
> > +		else
> > +			dma_ctrl_write(chan, XILINX_DMA_REG_DSTADDR,
> > +				       hw->buf_addr);
> > +
> > +		/* Start the transfer */
> > +		dma_ctrl_write(chan, XILINX_DMA_REG_BTT,
> > +			       hw->control & XILINX_DMA_MAX_TRANS_LEN);
> > +	}
> > +
> > +out_free_desc:
> > +	list_del(&desc->node);
> > +	chan->idle = false;
> > +	chan->active_desc = desc;
> > +}
>
> What prevents chan->active_desc from being overwritten before the
> previous descriptor is transferred to done_list? For instance, if two
> transfers are queued with issue_pending() in quick succession (such that
> xilinx_dma_start_transfer() is called twice before the interrupt for the
> first transfer occurs), won't the first descriptor be overwritten and lost?

Yes, there are some flaws in this implementation. I will fix them in the
next version of the patch.

Regards,
Kedar.