Hi,
On 30.11.2023 18:28, Lizhi Hou wrote:
Added Jan Kuliga, who submitted a similar change.
Thanks for CC'ing me on the other patchset. I'm currently working on
an interleaved-DMA transfers implementation for XDMA. While testing it,
I've come across a flaw in my patch, which you mentioned here (and it
also exists in Miquel's patch):
https://lore.kernel.org/dmaengine/20231124192524.134989-1-jankul@xxxxxxxxxxxxxxxx/T/#m20c1ca4bba291f6ca07a8e5fbcaeed9fd0a6f008
Thanks,
Lizhi
On 11/30/23 03:13, Miquel Raynal wrote:
The driver is capable of starting scatter-gather transfers and needs to
wait until their end. It is also capable of starting cyclic transfers
and will only be "reset" the next time the channel is reused. In
practice, most of the time no audio glitch is heard because the sound
card stops the flow on its side, so the DMA transfers are simply
discarded. There are however some cases (when playing a bit with the
number of frames and with a discontinuous sound file) when the sound
card seems to be slightly too slow at stopping the flow, leading to an
audible glitch.
In all cases, we need to gain better control of the DMA engine, and
adding proper ->device_terminate_all() and ->device_synchronize()
callbacks feels totally relevant. With these two callbacks, no glitch
can be heard anymore.
Fixes: cd8c732ce1a5 ("dmaengine: xilinx: xdma: Support cyclic transfers")
Signed-off-by: Miquel Raynal <miquel.raynal@xxxxxxxxxxx>
---
This was only tested with cyclic transfers.
---
drivers/dma/xilinx/xdma.c | 68 +++++++++++++++++++++++++++++++++++++++
1 file changed, 68 insertions(+)
diff --git a/drivers/dma/xilinx/xdma.c b/drivers/dma/xilinx/xdma.c
index e931ff42209c..290bb5d2d1e2 100644
--- a/drivers/dma/xilinx/xdma.c
+++ b/drivers/dma/xilinx/xdma.c
@@ -371,6 +371,31 @@ static int xdma_xfer_start(struct xdma_chan *xchan)
return ret;
xchan->busy = true;
+
+ return 0;
+}
+
+/**
+ * xdma_xfer_stop - Stop DMA transfer
+ * @xchan: DMA channel pointer
+ */
+static int xdma_xfer_stop(struct xdma_chan *xchan)
+{
+ struct virt_dma_desc *vd = vchan_next_desc(&xchan->vchan);
+ struct xdma_device *xdev = xchan->xdev_hdl;
+ int ret;
+
+ if (!vd || !xchan->busy)
+ return -EINVAL;
+
+ /* clear run stop bit to prevent any further auto-triggering */
+ ret = regmap_write(xdev->rmap, xchan->base + XDMA_CHAN_CONTROL_W1C,
+ CHAN_CTRL_RUN_STOP);
+ if (ret)
+ return ret;
Shouldn't the status register be cleared prior to using it next time? It
can be cleared on read by reading from a separate register (offset 0x44).
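Something along these lines would do, I think (untested sketch; the
XDMA_CHAN_STATUS_RC name for the clear-on-read mirror at offset 0x44 is
assumed, it is not defined in xdma-regs.h yet):
/* Untested sketch: a dummy read from the clear-on-read status mirror
 * (offset 0x44, name XDMA_CHAN_STATUS_RC assumed) drops any stale
 * status bits before the channel gets reused.
 */
static int xdma_clear_chan_status(struct xdma_chan *xchan)
{
	struct xdma_device *xdev = xchan->xdev_hdl;
	u32 st;

	return regmap_read(xdev->rmap, xchan->base + XDMA_CHAN_STATUS_RC,
			   &st);
}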
+
+ xchan->busy = false;
+
return 0;
}
@@ -475,6 +500,47 @@ static void xdma_issue_pending(struct dma_chan *chan)
spin_unlock_irqrestore(&xdma_chan->vchan.lock, flags);
}
+/**
+ * xdma_terminate_all - Terminate all transactions
+ * @chan: DMA channel pointer
+ */
+static int xdma_terminate_all(struct dma_chan *chan)
+{
+ struct xdma_chan *xdma_chan = to_xdma_chan(chan);
+ struct xdma_desc *desc = NULL;
+ struct virt_dma_desc *vd;
+ unsigned long flags;
+ LIST_HEAD(head);
+
+ spin_lock_irqsave(&xdma_chan->vchan.lock, flags);
+ xdma_xfer_stop(xdma_chan);
+
+ vd = vchan_next_desc(&xdma_chan->vchan);
+ if (vd)
+ desc = to_xdma_desc(vd);
+ if (desc) {
+ dma_cookie_complete(&desc->vdesc.tx);
Prior to a call to vchan_terminate_vdesc(), the vd node has to be
deleted from the vc.desc_issued list. Otherwise, if there is more than
one descriptor present on that list, its link with the list's head is
lost and freeing the resources associated with it becomes impossible
(doing so results in a dma_pool_destroy() failure). I noticed it while
playing with a large number of interleaved DMA TXs.
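For reference, roughly what I have in mind (untested, reusing the names
from the hunk above):
	vd = vchan_next_desc(&xdma_chan->vchan);
	if (vd) {
		/* Unlink from desc_issued first; vchan_terminate_vdesc()
		 * only adds the node to desc_terminated, so terminating
		 * the descriptor while it is still linked here leaves
		 * the rest of desc_issued unreachable.
		 */
		list_del(&vd->node);
		dma_cookie_complete(&vd->tx);
		vchan_terminate_vdesc(vd);
	}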
+ vchan_terminate_vdesc(&desc->vdesc);
+ }
+
+ vchan_get_all_descriptors(&xdma_chan->vchan, &head);
+ spin_unlock_irqrestore(&xdma_chan->vchan.lock, flags);
+ vchan_dma_desc_free_list(&xdma_chan->vchan, &head);
+
+ return 0;
+}
+
+/**
+ * xdma_synchronize - Synchronize terminated transactions
+ * @chan: DMA channel pointer
+ */
+static void xdma_synchronize(struct dma_chan *chan)
+{
+ struct xdma_chan *xdma_chan = to_xdma_chan(chan);
+
+ vchan_synchronize(&xdma_chan->vchan);
+}
+
/**
* xdma_prep_device_sg - prepare a descriptor for a DMA transaction
* @chan: DMA channel pointer
@@ -1088,6 +1154,8 @@ static int xdma_probe(struct platform_device *pdev)
xdev->dma_dev.device_prep_slave_sg = xdma_prep_device_sg;
xdev->dma_dev.device_config = xdma_device_config;
xdev->dma_dev.device_issue_pending = xdma_issue_pending;
+ xdev->dma_dev.device_terminate_all = xdma_terminate_all;
+ xdev->dma_dev.device_synchronize = xdma_synchronize;
xdev->dma_dev.filter.map = pdata->device_map;
xdev->dma_dev.filter.mapcnt = pdata->device_map_cnt;
xdev->dma_dev.filter.fn = xdma_filter_fn;
I have already prepared a patch with an appropriate fix, which I'm going
to submit with the whole patch series once I have interleaved DMA
transfers properly sorted out (hopefully soon). Or should I rather post
the patch with the fix right away, as a reply to the one already sent?
What do you prefer?
Thanks,
Jan