Re: [PATCH 1/2] dmaengine: rcar-dmac: add iommu support for slave transfers

Hi Vinod,

On Monday 25 January 2016 00:38:33 Laurent Pinchart wrote:
> On Monday 18 January 2016 19:06:29 Vinod Koul wrote:
> > On Thu, Jan 14, 2016 at 03:59:40PM +0200, Laurent Pinchart wrote:
> > > On Thursday 14 January 2016 09:22:25 Vinod Koul wrote:
> > >> On Thu, Jan 14, 2016 at 01:13:20AM +0200, Laurent Pinchart wrote:
> > >>> On Wednesday 13 January 2016 14:55:50 Niklas Söderlund wrote:
> > >>>> * Vinod Koul <vinod.koul@xxxxxxxxx> [2016-01-13 19:06:01 +0530]:
> > >>>>> On Mon, Jan 11, 2016 at 03:17:46AM +0100, Niklas Söderlund wrote:
> > >>>>>> Enable slave transfers to devices behind IPMMUs by mapping the
> > >>>>>> slave addresses using the dma-mapping API.
> > >>>>>> 
> > >>>>>> Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@xxxxxxxxxxxx>
> > >>>>>> ---
> > >>>>>> 
> > >>>>>>  drivers/dma/sh/rcar-dmac.c | 64 +++++++++++++++++++++++++++++++---
> > >>>>>>  1 file changed, 60 insertions(+), 4 deletions(-)
> > >>>>>> 
> > >>>>>> diff --git a/drivers/dma/sh/rcar-dmac.c b/drivers/dma/sh/rcar-dmac.c
> > >>>>>> index 7820d07..da94809 100644
> > >>>>>> --- a/drivers/dma/sh/rcar-dmac.c
> > >>>>>> +++ b/drivers/dma/sh/rcar-dmac.c
> > >>>>>> @@ -13,6 +13,7 @@
> > >>>>>>  #include <linux/dma-mapping.h>
> > >>>>>>  #include <linux/dmaengine.h>
> > >>>>>>  #include <linux/interrupt.h>
> > >>>>>> +#include <linux/iommu.h>
> > >>>>>>  #include <linux/list.h>
> > >>>>>>  #include <linux/module.h>
> > >>>>>>  #include <linux/mutex.h>
> > >>>>>> @@ -1101,6 +1102,24 @@ rcar_dmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
> > >>>>>>  	return desc;
> > >>>>>>  }
> > >>>>>> 
> > >>>>>> +static dma_addr_t __rcar_dmac_dma_map(struct dma_chan *chan, phys_addr_t addr,
> > >>>>>> +		size_t size, enum dma_data_direction dir)
> > >>>>>> +{
> > >>>>>> +	struct rcar_dmac_chan *rchan = to_rcar_dmac_chan(chan);
> > >>>>>> +	struct page *page = phys_to_page(addr);
> > >>>>>> +	size_t offset = addr - page_to_phys(page);
> > >>>>>> +	dma_addr_t map = dma_map_page(chan->device->dev, page, offset, size,
> > >>>>>> +			dir);
> > >>>>> 
> > >>>>> Hmmmm, dmaengine APIs for slave cases expect that the client has
> > >>>>> already mapped and provided an address which the dmaengine
> > >>>>> understands. So doing this in the driver here does not sound good to me.
> > >>>> 
> > >>>> It was my understanding that clients do not do this mapping and in
> > >>>> fact are expected not to. Is this not what Linus Walleij is trying to
> > >>>> address in '[PATCH] dmaengine: use phys_addr_t for slave
> > >>>> configuration'?
> > >>> 
> > >>> There's a problem somewhere and we need to fix it. Clients currently
> > >>> pass physical addresses and the DMA engine API expects a DMA address.
> > >>> There are only two ways to fix that: either modify the API to expect a
> > >>> phys_addr_t, or modify the clients to provide a dma_addr_t.
> > >> 
> > >> Okay, I am in two minds on this. Doing phys_addr_t seems okay, but
> > >> somehow I feel we should rather pass dma_addr_t so the dmaengine driver
> > >> gets a correct DMA address to use, and thus fix the clients. That may
> > >> be the right thing to do here, thoughts...?
> > > 
> > > Given that there should be more clients than DMA engine drivers, and
> > > given that knowledge of what has to be done to map a physical address to
> > > a DMA address accessible by the DMA engine should not be included in
> > > client drivers (in most cases I assume using the DMA mapping API will be
> > > enough, but details may vary), I believe it makes more sense to pass a
> > > phys_addr_t and let the DMA engine drivers handle it.
> > > 
> > > There's another issue I just remembered. Consider the following cases.
> > > 
> > > 1. DMA engine channel that has an optional IOMMU covering both the src
> > > and dst side. In that case mapping can be performed by the client or DMA
> > > engine driver, the DMA mapping API will handle the IOMMU behind the
> > > scene.
> > > 
> > > 2. DMA engine channel that has an optional IOMMU on the memory side and
> > > no support for IOMMU on the slave (in the sense of the register in front
> > > of the client's FIFO) side. In that case a client mapping buffers on
> > > both the src and dst side would set an IOMMU mapped address for the
> > > slave side, which wouldn't work. If the DMA engine driver were to
> > > perform the mapping then it could skip it on the slave side, knowing
> > > that the slave side has no IOMMU.
> > > 
> > > 3. DMA engine channel that has independently optional IOMMUs on both
> > > sides. This can't be supported today as we have a single struct device
> > > per channel and thus can't configure the IOMMU independently on the two
> > > sides.
> > > 
> > > It's getting messy :-)
> > 
> > Yes I do agree on that, but the problem is today none of the slave drivers
> > expect or do the mapping, changing that will cause issues...
> > 
> > And how many really have an IOMMU behind them? Few, out of the large set
> > we have...
> 
> Today neither the DMA engine drivers nor the client drivers do the mapping,
> so we have an issue either way. The question is on which side to solve it. If
> I understand correctly you fear that mapping the address in the DMA engine
> drivers would cause issues with client drivers that don't expect that
> behaviour, but I don't really see where the issue is. Could you please
> elaborate ?

Ping. I don't think we're very far from finding an agreement on this topic. If 
you prefer we could discuss it on IRC, it can be faster than e-mail.

> >>> The assumption from the API was always that the client should perform the
> >>> mapping...
> >>> 
> >>>> The struct device used to map buffer through the DMA mapping API needs
> >>>> to be the DMA engine struct device, not the client struct device. As
> >>>> the client is not expected to have access to the DMA engine device I
> >>>> would argue that DMA engines should perform the mapping and the API
> >>>> should take a phys_addr_t.
> >>> 
> >>> That is not a correct assumption. Once the client gets a channel, they
> >>> have access to dmaengine device and should use that to map. Yes the key
> >>> is to map using dmaengine device and not client device. You can use
> >>> chan->device->dev.
> >> 
> >> Right, that's required by the DMA engine API even when not using slave
> >> transfers. Which raises an interesting consistency issue in the API, I
> >> agree about that.
> >> 
> >>>> Vinod, unless you have reasons to do it otherwise, can we get your ack
> >>>> on this approach and start hammering at the code ? The problem has
> >>>> remained known and unfixed for too long, we need to move on.

-- 
Regards,

Laurent Pinchart
