On Fri, 19 Feb 2021 14:40:12 +0200
Alexandru Ardelean <alexandru.ardelean@xxxxxxxxxx> wrote:

> From: Lars-Peter Clausen <lars@xxxxxxxxxx>
>
> This change adds support for cyclic DMA transfers using the IIO buffer DMA
> infrastructure.
> To do this, userspace must set the IIO_BUFFER_BLOCK_FLAG_CYCLIC flag on the
> block when enqueueing them via the ENQUEUE_BLOCK ioctl().
>
> Signed-off-by: Lars-Peter Clausen <lars@xxxxxxxxxx>
> Signed-off-by: Alexandru Ardelean <alexandru.ardelean@xxxxxxxxxx>

Series in general looks good to me, but this change needs a little more
detail, and probably some level of example userspace flow. I don't
really understand how this is used!

Also, it's easy to test output buffers with the kfifo support, so we
should be able to move forward quickly with that part (patches 1-3;
4 is probably fine as well, as it's clearly harmless). The dma stuff
worries me more, at least partly based on the experience of the
original dma buffers, which basically sat there unused (in upstream)
for a very long time. So to move these forward, they need to come
with users...

Thanks,

Jonathan

> ---
>  .../buffer/industrialio-buffer-dmaengine.c | 24 ++++++++++++-------
>  include/uapi/linux/iio/buffer.h            |  1 +
>  2 files changed, 17 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> index 65458a6cc81a..39cc230c7991 100644
> --- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> +++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> @@ -82,14 +82,22 @@ static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
>  
>  	direction = dmaengine_buffer->is_tx ? DMA_MEM_TO_DEV : DMA_DEV_TO_MEM;
>  
> -	desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
> -		block->phys_addr, block->block.bytes_used, direction,
> -		DMA_PREP_INTERRUPT);
> -	if (!desc)
> -		return -ENOMEM;
> -
> -	desc->callback_result = iio_dmaengine_buffer_block_done;
> -	desc->callback_param = block;
> +	if (block->block.flags & IIO_BUFFER_BLOCK_FLAG_CYCLIC) {
> +		desc = dmaengine_prep_dma_cyclic(dmaengine_buffer->chan,
> +			block->phys_addr, block->block.bytes_used,
> +			block->block.bytes_used, direction, 0);
> +		if (!desc)
> +			return -ENOMEM;
> +	} else {
> +		desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
> +			block->phys_addr, block->block.bytes_used, direction,
> +			DMA_PREP_INTERRUPT);
> +		if (!desc)
> +			return -ENOMEM;
> +
> +		desc->callback_result = iio_dmaengine_buffer_block_done;
> +		desc->callback_param = block;
> +	}
>  
>  	cookie = dmaengine_submit(desc);
>  	if (dma_submit_error(cookie))
> diff --git a/include/uapi/linux/iio/buffer.h b/include/uapi/linux/iio/buffer.h
> index 4e4ee9befea1..1bde508fe1b9 100644
> --- a/include/uapi/linux/iio/buffer.h
> +++ b/include/uapi/linux/iio/buffer.h
> @@ -33,6 +33,7 @@ struct iio_buffer_block_alloc_req {
>  
>  /* A function will be assigned later for BIT(0) */
>  #define IIO_BUFFER_BLOCK_FLAG_RESERVED	(1 << 0)
> +#define IIO_BUFFER_BLOCK_FLAG_CYCLIC	(1 << 1)
>  
>  /**
>   * struct iio_buffer_block - Descriptor for a single IIO block
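
To be concrete about the sort of thing I'd like to see spelled out in the
commit message or cover letter: below is my rough guess at the intended
userspace flow for a cyclic output block. This is only a sketch based on
my reading of the earlier patches in this series; the ioctl names, the
struct fields beyond flags/bytes_used (which this patch itself uses), the
fill_waveform() helper and the mmap offset handling are my assumptions,
not something I've checked or run. Error handling is omitted for brevity.

	/*
	 * Sketch only - ioctl names and struct layout assumed from the
	 * rest of this series, not verified against the actual uapi.
	 */
	#include <stddef.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <linux/iio/buffer.h>

	static int start_cyclic_output(int buf_fd, size_t block_size)
	{
		struct iio_buffer_block_alloc_req req = {
			.size = block_size,	/* bytes per block */
			.count = 1,		/* one block, replayed forever */
		};
		struct iio_buffer_block block = { .id = 0 };
		void *mem;

		/* Allocate a single DMA block on the buffer chardev */
		ioctl(buf_fd, IIO_BUFFER_BLOCK_ALLOC_IOCTL, &req);

		/* Query the block to learn its size/offset for mmap() */
		ioctl(buf_fd, IIO_BUFFER_BLOCK_QUERY_IOCTL, &block);

		mem = mmap(NULL, block.size, PROT_WRITE, MAP_SHARED,
			   buf_fd, block.offset);

		/*
		 * Fill the block with one period of the waveform to repeat.
		 * fill_waveform() is a placeholder for application code.
		 */
		fill_waveform(mem, block.size);

		/*
		 * Mark it cyclic and hand it to the DMA. With
		 * dmaengine_prep_dma_cyclic() and no completion callback,
		 * I assume the block never completes and is never handed
		 * back via DEQUEUE; it just repeats until buffer disable.
		 */
		block.bytes_used = block.size;
		block.flags = IIO_BUFFER_BLOCK_FLAG_CYCLIC;
		ioctl(buf_fd, IIO_BUFFER_BLOCK_ENQUEUE_IOCTL, &block);

		/* Then enable the buffer as usual */
		return 0;
	}

If that is roughly right, a paragraph along those lines (plus a note on
how, or whether, a cyclic block is ever torn down short of disabling the
buffer) would answer my question above.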