Hi Ludo,

On Mon, Mar 23, 2015 at 02:25:13PM +0100, Ludovic Desroches wrote:
> On Sat, Mar 21, 2015 at 12:42:06PM -0700, Maxime Ripard wrote:
> > This framework aims at easing the development of dmaengine drivers by
> > providing generic implementations of the functions usually required by
> > dmaengine, while abstracting away most of the logic required.
> 
> For sure it will ease writing new dmaengine drivers! Moreover, adopting
> this framework will convert the driver to use virtual channels, won't it?

Yes, it relies on virt-dma.

> > It is very relevant for controllers that have more requests than
> > channels, where you need to have some scheduling that is usually very
> > bug prone, and shouldn't be implemented in each and every driver.
> > 
> > This introduces a new set of callbacks for drivers to implement the
> > device-specific behaviour. These new callbacks aim at providing both
> > the channel-related operations (start the transfer of a new
> > descriptor, pause, resume, etc.) and the LLI-related operations
> > (iterator and various accessors).
> > 
> > So far, the only transfer types supported are memcpy and slave
> > transfers, but it should eventually support new transfer types as new
> > drivers get converted.
> > 
> > Signed-off-by: Maxime Ripard <maxime.ripard@xxxxxxxxxxxxxxxxxx>
> > ---
> >  drivers/dma/Kconfig         |   4 +
> >  drivers/dma/Makefile        |   1 +
> >  drivers/dma/scheduled-dma.c | 571 ++++++++++++++++++++++++++++++++++++++++++++
> >  drivers/dma/scheduled-dma.h | 140 +++++++++++
> >  4 files changed, 716 insertions(+)
> >  create mode 100644 drivers/dma/scheduled-dma.c
> >  create mode 100644 drivers/dma/scheduled-dma.h
> > 
> > diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
> > index a874b6ec6650..032bf5fcd58b 100644
> > --- a/drivers/dma/Kconfig
> > +++ b/drivers/dma/Kconfig
> > @@ -406,6 +406,7 @@ config DMA_SUN6I
> >  	depends on RESET_CONTROLLER
> >  	select DMA_ENGINE
> >  	select DMA_VIRTUAL_CHANNELS
> > +	select DMA_SCHEDULED
> >  	help
> >  	  Support for the DMA engine first found in Allwinner A31 SoCs.
> > 
> > @@ -431,6 +432,9 @@ config DMA_ENGINE
> >  config DMA_VIRTUAL_CHANNELS
> >  	tristate
> > 
> > +config DMA_SCHEDULED
> > +	bool
> > +
> 
> I think it should select DMA_VIRTUAL_CHANNELS.

Yep, indeed.
> >  config DMA_ACPI
> >  	def_bool y
> >  	depends on ACPI
> > diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
> > index f915f61ec574..1db31814c749 100644
> > --- a/drivers/dma/Makefile
> > +++ b/drivers/dma/Makefile
> > @@ -5,6 +5,7 @@ obj-$(CONFIG_DMA_ENGINE) += dmaengine.o
> >  obj-$(CONFIG_DMA_VIRTUAL_CHANNELS) += virt-dma.o
> >  obj-$(CONFIG_DMA_ACPI) += acpi-dma.o
> >  obj-$(CONFIG_DMA_OF) += of-dma.o
> > +obj-$(CONFIG_DMA_SCHEDULED) += scheduled-dma.o
> > 
> >  obj-$(CONFIG_INTEL_MID_DMAC) += intel_mid_dma.o
> >  obj-$(CONFIG_DMATEST) += dmatest.o
> > diff --git a/drivers/dma/scheduled-dma.c b/drivers/dma/scheduled-dma.c
> > new file mode 100644
> > index 000000000000..482d04f2ccbc
> > --- /dev/null
> > +++ b/drivers/dma/scheduled-dma.c
> > @@ -0,0 +1,571 @@
> > +/*
> > + * Copyright (C) 2015 Maxime Ripard
> > + * Maxime Ripard <maxime.ripard@xxxxxxxxxxxxxxxxxx>
> > + *
> > + * This program is free software; you can redistribute it and/or modify
> > + * it under the terms of the GNU General Public License as published by
> > + * the Free Software Foundation; either version 2 of the License, or
> > + * (at your option) any later version.
> > + */
> > +
> > +#include <linux/dmaengine.h>
> > +#include <linux/dmapool.h>
> > +#include <linux/list.h>
> > +#include <linux/slab.h>
> > +
> > +#include "scheduled-dma.h"
> > +#include "virt-dma.h"
> > +
> > +static struct sdma_request *sdma_pop_queued_transfer(struct sdma *sdma,
> > +						     struct sdma_channel *schan)
> > +{
> > +	struct sdma_request *sreq = NULL;
> > +	unsigned long flags;
> > +
> > +	spin_lock_irqsave(&sdma->lock, flags);
> > +
> > +	/* No requests are awaiting an available channel */
> > +	if (list_empty(&sdma->pend_reqs))
> > +		goto out;
> > +
> > +	/*
> > +	 * If we don't have any validate_request callback, any request
> > +	 * can be dispatched to any channel.
> > +	 *
> > +	 * Remove the first entry and return it.
> > +	 */
> > +	if (!sdma->ops->validate_request) {
> > +		sreq = list_first_entry(&sdma->pend_reqs,
> > +					struct sdma_request, node);
> > +		list_del_init(&sreq->node);
> > +		goto out;
> > +	}
> > +
> > +	list_for_each_entry(sreq, &sdma->pend_reqs, node) {
> > +		/*
> > +		 * Ask the driver to validate that the request can
> > +		 * happen on the channel.
> > +		 */
> > +		if (sdma->ops->validate_request(schan, sreq)) {
> > +			list_del_init(&sreq->node);
> > +			goto out;
> > +		}
> > +
> > +		/* Otherwise, just keep looping */
> > +	}
> > +
> > +	/*
> > +	 * The iterator leaves sreq pointing at the list head container
> > +	 * when no entry matched, so reset it before returning.
> > +	 */
> > +	sreq = NULL;
> > +
> > +out:
> > +	spin_unlock_irqrestore(&sdma->lock, flags);
> > +
> > +	return sreq;
> > +}
> > +
> > +/*
> > + * We need a function to push a descriptor to a pchan.
> > + *
> > + * Normal flow:
> > + * - Election of a pchan (framework)
> > + * - Push of a descriptor to the pchan (driver)
> > + * - idle...
> > + * - interrupt (driver)
> > + * - Transfer done, notify the framework (driver)
> > + * - Is there a new transfer in the queue?
> > + *   + If so, off we go again
> > + *   + If not, return from the interrupt, the pchan is released
> > + */
> > +
> > +struct sdma_desc *sdma_report(struct sdma *sdma,
> > +			      struct sdma_channel *schan,
> > +			      enum sdma_report_status status)
> > +{
> > +	struct sdma_desc *sdesc = NULL;
> > +	struct virt_dma_desc *vdesc;
> > +	struct sdma_request *sreq;
> > +
> > +	switch (status) {
> > +	case SDMA_REPORT_TRANSFER:
> > +		/*
> > +		 * We're done with the current transfer that was in this
> > +		 * physical channel.
> > +		 */
> > +		vchan_cookie_complete(&schan->desc->vdesc);
> > +
> > +		/*
> > +		 * Now, try to see if there's any queued transfer
> > +		 * awaiting an available channel.
> > +		 *
> > +		 * If not, just bail out, and mark the pchan as
> > +		 * available.
> > +		 *
> > +		 * If there's such a transfer, validate that the
> > +		 * driver can handle it, and ask it to do the
> > +		 * transfer.
> > +		 */
> > +		sreq = sdma_pop_queued_transfer(sdma, schan);
> > +		if (!sreq) {
> > +			list_add_tail(&schan->node, &sdma->avail_chans);
> > +			return NULL;
> > +		}
> > +
> > +		/* Mark the request as assigned to a particular channel */
> > +		sreq->chan = schan;
> > +
> > +		/* Retrieve the next transfer descriptor */
> > +		vdesc = vchan_next_desc(&sreq->vchan);
> > +		schan->desc = sdesc = to_sdma_desc(&vdesc->tx);
> > +
> > +		break;
> > +
> > +	default:
> > +		break;
> > +	}
> > +
> > +	return sdesc;
> > +}
> > +EXPORT_SYMBOL_GPL(sdma_report);
> > +
> > +static enum dma_status sdma_tx_status(struct dma_chan *chan,
> > +				      dma_cookie_t cookie,
> > +				      struct dma_tx_state *state)
> > +{
> > +	struct sdma_request *sreq = to_sdma_request(chan);
> > +	struct sdma *sdma = to_sdma(chan->device);
> > +	struct sdma_channel *schan = sreq->chan;
> > +	struct virt_dma_desc *vd;
> > +	struct sdma_desc *desc;
> > +	enum dma_status ret;
> > +	unsigned long flags;
> > +	size_t bytes = 0;
> > +	void *lli;
> > +
> > +	spin_lock_irqsave(&sreq->vchan.lock, flags);
> > +
> > +	ret = dma_cookie_status(chan, cookie, state);
> > +	if (ret == DMA_COMPLETE)
> > +		goto out;
> > +
> > +	vd = vchan_find_desc(&sreq->vchan, cookie);
> > +	if (vd) {
> > +		/* Only dereference vd once we know it is valid */
> > +		desc = to_sdma_desc(&vd->tx);
> > +		lli = desc->v_lli;
> > +		while (true) {
> > +			bytes += sdma->ops->lli_size(lli);
> > +
> > +			if (!sdma->ops->lli_has_next(lli))
> > +				break;
> > +
> > +			lli = sdma->ops->lli_next(lli);
> > +		}
> > +	} else if (chan) {
> > +		bytes = sdma->ops->channel_residue(schan);
> > +	}
> > +
> > +	dma_set_residue(state, bytes);
> > +
> > +out:
> > +	spin_unlock_irqrestore(&sreq->vchan.lock, flags);
> > +
> > +	return ret;
> > +}
> > +
> > +static int sdma_config(struct dma_chan *chan,
> > +		       struct dma_slave_config *config)
> > +{
> > +	struct sdma_request *sreq = to_sdma_request(chan);
> > +	unsigned long flags;
> > +
> > +	spin_lock_irqsave(&sreq->vchan.lock, flags);
> > +	memcpy(&sreq->cfg, config, sizeof(*config));
> > +
> > +	spin_unlock_irqrestore(&sreq->vchan.lock, flags);
> > +
> > +	return 0;
> > +}
> > +
> > +static int sdma_pause(struct dma_chan *chan)
> > +{
> > +	struct sdma_request *sreq = to_sdma_request(chan);
> > +	struct sdma_channel *schan = sreq->chan;
> > +	struct sdma *sdma = to_sdma(chan->device);
> > +	unsigned long flags;
> > +
> > +	spin_lock_irqsave(&sreq->vchan.lock, flags);
> > +
> > +	/*
> > +	 * If the request is currently scheduled on a channel, just
> > +	 * pause the channel.
> > +	 *
> > +	 * If not, remove the request from the pending list.
> > +	 */
> > +	if (schan) {
> > +		sdma->ops->channel_pause(schan);
> > +	} else {
> > +		spin_lock(&sdma->lock);
> > +		list_del_init(&sreq->node);
> > +		spin_unlock(&sdma->lock);
> > +	}
> > +
> > +	spin_unlock_irqrestore(&sreq->vchan.lock, flags);
> > +
> > +	return 0;
> > +}
> > +
> > +static int sdma_resume(struct dma_chan *chan)
> > +{
> > +	struct sdma_request *sreq = to_sdma_request(chan);
> > +	struct sdma_channel *schan = sreq->chan;
> > +	struct sdma *sdma = to_sdma(chan->device);
> > +	unsigned long flags;
> > +
> > +	spin_lock_irqsave(&sreq->vchan.lock, flags);
> > +
> > +	/*
> > +	 * If the request is currently scheduled on a channel, just
> > +	 * resume the channel.
> > +	 *
> > +	 * If not, add the request to the pending list.
> > +	 */
> 
> Is such a case possible?

Yes, it's possible. If you have more requests than channels, and all
channels are currently busy with other transfers, we can end up in a
case where we have a request on which issue_pending has been called,
but which is not currently scheduled on any channel, for an unknown
amount of time. In such a case, you might very well end up with
clients calling pause/resume on these requests.

> The request should already be in the pending list, shouldn't it?

Not really, it would have been removed by sdma_pause.

> By the way, I may have missed something but where do you add a
> request to the pending list? I have seen it only in the resume case.
It should be in issue_pending and pause. If not, it's a bug :)

Thanks!
Maxime

-- 
Maxime Ripard, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com
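[Editor's note] The dispatch step discussed above (a freed physical
channel picks the first pending request, subject to the driver's
optional validate_request callback, as sdma_pop_queued_transfer() does)
can be modelled in a few lines of user-space C. This is only an
illustrative sketch: model_chan, model_req, pop_queued and odd_on_chan0
are made-up names, a plain array stands in for the pend_reqs list, and
nothing here is part of the proposed kernel API.

```c
#include <stdbool.h>
#include <stddef.h>

struct model_chan { int id; };
struct model_req  { int id; bool pending; };

/* Driver hook: may this request be dispatched to this channel? */
typedef bool (*validate_fn)(const struct model_chan *chan,
                            const struct model_req *req);

/*
 * Return the first pending request the channel may serve and mark it
 * as claimed (the list_del_init() step in the real code).  With no
 * validate callback, any request can run on any channel.  Returns
 * NULL when nothing fits, in which case the channel stays idle.
 */
struct model_req *pop_queued(struct model_req *reqs, size_t n,
                             struct model_chan *chan, validate_fn validate)
{
    for (size_t i = 0; i < n; i++) {
        if (!reqs[i].pending)
            continue;
        if (validate && !validate(chan, &reqs[i]))
            continue;
        reqs[i].pending = false;
        return &reqs[i];
    }
    return NULL;
}

/* Example constraint: odd-numbered requests may only use channel 0. */
bool odd_on_chan0(const struct model_chan *chan, const struct model_req *req)
{
    return (req->id % 2 == 0) || (chan->id == 0);
}
```

With that constraint, a channel other than 0 skips request 1 and claims
request 2, while request 1 stays queued until channel 0 becomes free,
which is the behaviour the validate_request loop in the patch provides.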