The cited commit describes that when using writel(), an explicit wmb() is
not needed. wmb() is an expensive barrier. writel() uses the needed
platform-specific barrier instead of the expensive wmb(). Hence, update
the example to be more accurate and match the current implementation.

commit 5846581e3563 ("locking/memory-barriers.txt: Fix broken DMA vs. MMIO
ordering example")

Signed-off-by: Parav Pandit <parav@xxxxxxxxxx>
---
changelog:
v2->v3:
 - removed redundant description for writeX()
 - updated text for alignment and smaller change lines
 - updated commit log with blank line before signed-off-by line
v1->v2:
 - further improved description of writel() example
 - changed commit subject from 'usage' to 'example'
v0->v1:
 - corrected to mention I/O barrier instead of dma_wmb()
 - removed numbered references in commit log
 - corrected typo 'explcit' to 'explicit' in commit log
---
 Documentation/memory-barriers.txt | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index 832b5d36e279..8952fd86c6e6 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -1927,10 +1927,11 @@ There are some more advanced barrier functions:
      before we read the data from the descriptor, and the dma_wmb() allows
      us to guarantee the data is written to the descriptor before the device
      can see it now has ownership.  The dma_mb() implies both a dma_rmb() and
-     a dma_wmb().  Note that, when using writel(), a prior wmb() is not needed
+     a dma_wmb().  Note that, when using writel(), a prior barrier is not needed
      to guarantee that the cache coherent memory writes have completed before
      writing to the MMIO region.  The cheaper writel_relaxed() does not provide
-     this guarantee and must not be used here.
+     this guarantee and must not be used here.  Hence, writeX() is always
+     preferred.
 
      See the subsection "Kernel I/O barrier effects" for more information on
      relaxed I/O accessors and the Documentation/core-api/dma-api.rst file for
-- 
2.26.2
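
For context, the descriptor-plus-doorbell pattern the updated paragraph
describes looks roughly like the sketch below. This is only an
illustration, not taken from any driver or from memory-barriers.txt
itself; struct ring, struct tx_desc, DESC_OWNED_BY_DEVICE and db_reg are
made-up names used purely to show where the ordering comes from:

	#include <linux/io.h>
	#include <linux/types.h>

	/* Illustrative layout of a descriptor in coherent DMA memory. */
	struct tx_desc {
		__le64 addr;
		__le32 len;
		__le32 flags;
	};

	/* Hypothetical ring: descriptors in coherent memory, MMIO doorbell. */
	struct ring {
		struct tx_desc *descs;		/* coherent DMA memory */
		void __iomem *db_reg;		/* MMIO doorbell register */
		u32 tail;
	};

	static void ring_post(struct ring *ring, dma_addr_t buf, u32 len)
	{
		struct tx_desc *desc = &ring->descs[ring->tail];

		/* Writes to coherent memory describing the buffer. */
		desc->addr  = cpu_to_le64(buf);
		desc->len   = cpu_to_le32(len);
		desc->flags = cpu_to_le32(DESC_OWNED_BY_DEVICE);

		/*
		 * No explicit wmb() here: writel() already orders the coherent
		 * memory writes above before the MMIO doorbell write below.
		 * writel_relaxed() would not give that ordering and must not
		 * be used for the doorbell in this pattern.
		 */
		writel(ring->tail, ring->db_reg);
	}

The point is that the final writel() carries the ordering that an explicit
wmb() would otherwise have to provide ahead of a writel_relaxed().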