Re: [PATCH v4 1/2] asm-generic: Add memory barrier dma_mb()

On Mon, May 23, 2022 at 01:35:27PM +0200, Marco Elver wrote:
> On Mon, 23 May 2022 at 13:21, Kefeng Wang <wangkefeng.wang@xxxxxxxxxx> wrote:
> >
> > The memory barrier dma_mb() was introduced by commit a76a37777f2c
> > ("iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer");
> > it ensures that prior accesses to memory by a CPU (both reads and
> > writes) are ordered w.r.t. a subsequent MMIO write.
> >
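
A minimal sketch of the ordering the commit message describes, for anyone
reading along: the CPU's stores to coherent memory must be ordered before
the MMIO write that tells the device to look at them.  The names used here
(desc, buf_dma, len, tail, ring->doorbell) are purely illustrative, not
taken from this patch:

	/* publish a descriptor in coherent (DMA) memory */
	desc->addr = cpu_to_le64(buf_dma);
	desc->len  = cpu_to_le32(len);

	/* order the prior reads *and* writes before the MMIO write below */
	dma_mb();

	/* kick the device with a relaxed doorbell write */
	writel_relaxed(tail, ring->doorbell);
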
> > Reviewed-by: Arnd Bergmann <arnd@xxxxxxxx> # for asm-generic
> > Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
> 
> Reviewed-by: Marco Elver <elver@xxxxxxxxxx>

Just checking...  Did these ever get picked up?  It was suggested
that they go up via the arm64 tree, if I remember correctly.

							Thanx, Paul

> > ---
> >  Documentation/memory-barriers.txt | 11 ++++++-----
> >  include/asm-generic/barrier.h     |  8 ++++++++
> >  2 files changed, 14 insertions(+), 5 deletions(-)
> >
> > diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
> > index b12df9137e1c..832b5d36e279 100644
> > --- a/Documentation/memory-barriers.txt
> > +++ b/Documentation/memory-barriers.txt
> > @@ -1894,6 +1894,7 @@ There are some more advanced barrier functions:
> >
> >   (*) dma_wmb();
> >   (*) dma_rmb();
> > + (*) dma_mb();
> >
> >       These are for use with consistent memory to guarantee the ordering
> >       of writes or reads of shared memory accessible to both the CPU and a
> > @@ -1925,11 +1926,11 @@ There are some more advanced barrier functions:
> >       The dma_rmb() allows us guarantee the device has released ownership
> >       before we read the data from the descriptor, and the dma_wmb() allows
> >       us to guarantee the data is written to the descriptor before the device
> > -     can see it now has ownership.  Note that, when using writel(), a prior
> > -     wmb() is not needed to guarantee that the cache coherent memory writes
> > -     have completed before writing to the MMIO region.  The cheaper
> > -     writel_relaxed() does not provide this guarantee and must not be used
> > -     here.
> > +     can see it now has ownership.  The dma_mb() implies both a dma_rmb() and
> > +     a dma_wmb().  Note that, when using writel(), a prior wmb() is not needed
> > +     to guarantee that the cache coherent memory writes have completed before
> > +     writing to the MMIO region.  The cheaper writel_relaxed() does not provide
> > +     this guarantee and must not be used here.
> >
> >       See the subsection "Kernel I/O barrier effects" for more information on
> >       relaxed I/O accessors and the Documentation/core-api/dma-api.rst file for
> > diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
> > index fd7e8fbaeef1..961f4d88f9ef 100644
> > --- a/include/asm-generic/barrier.h
> > +++ b/include/asm-generic/barrier.h
> > @@ -38,6 +38,10 @@
> >  #define wmb()  do { kcsan_wmb(); __wmb(); } while (0)
> >  #endif
> >
> > +#ifdef __dma_mb
> > +#define dma_mb()       do { kcsan_mb(); __dma_mb(); } while (0)
> > +#endif
> > +
> >  #ifdef __dma_rmb
> >  #define dma_rmb()      do { kcsan_rmb(); __dma_rmb(); } while (0)
> >  #endif
> > @@ -65,6 +69,10 @@
> >  #define wmb()  mb()
> >  #endif
> >
> > +#ifndef dma_mb
> > +#define dma_mb()       mb()
> > +#endif
> > +
> >  #ifndef dma_rmb
> >  #define dma_rmb()      rmb()
> >  #endif
> > --
> > 2.35.3
> >
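For reference, the descriptor-ownership pattern that the quoted
Documentation/memory-barriers.txt text is describing looks roughly like
this (a sketch with hypothetical names: desc, DEVICE_OWN, DESC_NOTIFY,
doorbell; dma_mb() is the barrier to reach for when both prior reads and
prior writes to the coherent buffer must be ordered before a relaxed MMIO
write):

	if (desc->status != DEVICE_OWN) {
		/* do not read the data until we own the descriptor */
		dma_rmb();

		/* read/modify the data */
		read_data = desc->data;
		desc->data = write_data;

		/* flush the modifications before the status update */
		dma_wmb();

		/* hand ownership back to the device */
		desc->status = DEVICE_OWN;

		/*
		 * writel() orders the coherent-memory writes above before
		 * the MMIO write, so no extra wmb() is needed; with the
		 * cheaper writel_relaxed() that ordering would have to come
		 * from an explicit barrier such as dma_mb().
		 */
		writel(DESC_NOTIFY, doorbell);
	}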


