Re: [PATCH v2 4/6] spi: davinci: flush caches when performing DMA

On Fri, Feb 17, 2017 at 12:36:17PM +0100, Frode Isaksen wrote:
> On 17/02/2017 12:22, Russell King - ARM Linux wrote:
> > The DMA API deals with the _kernel_ lowmem mapping.  It has no knowledge
> > of any other aliases in the system.  When you have a VIVT cache (as all
> > old ARM CPUs have) then if you access the memory through a different
> > alias from the kernel lowmem mapping (iow, vmalloc) then the DMA API
> > can't help you.
> >
> > However, the correct place to use flush_kernel_vmap_range() etc is not
> > in drivers - it's supposed to be done in the callers that know that
> > the memory is aliased.
> 
> OK, so this should be done in the ubifs layer instead ? xfs already does
> this, but no other fs.

These APIs were created when XFS was being used on older ARMs and people
experienced corruption.  XFS was the only filesystem driver which wanted
to do this (horrid, imho) DMA to memory that it accessed via a vmalloc
area mapping.

If ubifs is also doing this, it's followed XFS down the same route, but
ignored the need for additional flushing.
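For reference, the pattern cachetlb.txt describes (and which XFS follows) is that the caller who knows the buffer is a vmalloc alias brackets the I/O with the flush/invalidate pair. A rough sketch, where `buf`, `len` and `do_dma_write()` are illustrative names rather than anything from a real driver:

```c
/* Sketch of the Documentation/cachetlb.txt pattern: the caller that
 * knows 'buf' is a vmalloc alias flushes around the I/O itself.
 * 'do_dma_write()' is a hypothetical stand-in for the actual transfer.
 */
void *buf = vmalloc(len);

/* Before the device reads the buffer: write back dirty lines from
 * the vmalloc alias so physical memory (and the lowmem alias) is
 * up to date.
 */
flush_kernel_vmap_range(buf, len);

do_dma_write(buf, len);		/* device reads memory via DMA */

/* Before the CPU reads data the device wrote: discard stale lines
 * the CPU may have speculatively fetched through the alias.
 */
invalidate_kernel_vmap_range(buf, len);
```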

The down-side to adding this at the filesystem layer is that you get the
impact whether or not the driver does DMA.  However, for XFS that's
absolutely necessary, as block devices will transfer to the kernel lowmem
mapping, which itself will alias with the vmalloc area mapping.

SPI is another special case - rather than following the established
mechanism of passing data references via scatterlists or similar, it
also passes them via virtual addresses, which means SPI can directly
access the vmalloc area when performing PIO.  This really makes the
problem more complex, because it means that if you do have a SPI
driver that does that, it's going to be reading/writing direct from
vmalloc space.
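Concretely, a SPI message carries plain pointers, so nothing stops a caller from handing a driver a vmalloc address. A minimal sketch (buffer and device names are illustrative, the APIs are the standard spi_message ones):

```c
/* 'vbuf' is a vmalloc alias; a PIO driver will access it directly
 * through the vmalloc mapping, while a DMA-capable driver will map
 * the lowmem alias - two different cache footprints on a VIVT cache.
 */
struct spi_transfer t = {
	.tx_buf = vbuf,		/* vbuf = vmalloc(...) */
	.len    = len,
};
struct spi_message m;

spi_message_init(&m);
spi_message_add_tail(&t, &m);
spi_sync(spi, &m);		/* 'spi' is the illustrative device */
```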

That's not a problem as long as the data is only accessed via vmalloc
space, but it will definitely go totally wrong if the data is
subsequently mapped into userspace.

The other important thing to realise is that the interfaces in
cachetlb.txt assume that it's the lowmem mapping that will be accessed,
and the IO device will push that data out to physical memory (either via
the DMA API, or flush_kernel_dcache_page()).  That's not true of SPI,
as it passes virtual addresses around.
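For the lowmem case those interfaces describe, the expectation is per-page: after the CPU PIO-fills a page that may have other mappings, the driver pushes its stores out before anything else looks at them. A sketch of that contract (names and offsets illustrative):

```c
/* After CPU (PIO) stores into a page that may also be mapped
 * elsewhere: make the kernel's stores visible to the other aliases,
 * per the cachetlb.txt contract.
 */
struct page *page = sg_page(sg);	/* page backing the buffer */
void *addr = kmap_atomic(page);

memcpy(addr + offset, fifo_data, n);	/* illustrative PIO fill */
flush_kernel_dcache_page(page);		/* the cachetlb.txt helper */
kunmap_atomic(addr);
```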

So... overall, I'm not sure that this problem is properly solvable given
SPI's insistence on passing virtual addresses and the differences in this
area between SPI and block.

What I'm quite sure about is that adding yet more cache flushing
interfaces for legacy cache types really isn't the way forward.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.