Re: memory-barriers.txt again (was Re: [PATCH 4/9] firewire: don't use PREPARE_DELAYED_WORK)

Hi Paul,

On 02/23/2014 11:37 AM, Paul E. McKenney wrote:
commit aba6b0e82c9de53eb032844f1932599f148ff68d
Author: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
Date:   Sun Feb 23 08:34:24 2014 -0800

     Documentation/memory-barriers.txt: Clarify release/acquire ordering

     This commit fixes a couple of typos and clarifies what happens when
     the CPU chooses to execute a later lock acquisition before a prior
     lock release, in particular, why deadlock is avoided.

     Reported-by: Peter Hurley <peter@xxxxxxxxxxxxxxxxxx>
     Reported-by: James Bottomley <James.Bottomley@xxxxxxxxxxxxxxxxxxxxx>
     Reported-by: Stefan Richter <stefanr@xxxxxxxxxxxxxxxxx>
     Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>

diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index 9dde54c55b24..c8932e06edf1 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -1674,12 +1674,12 @@ for each construct.  These operations all imply certain barriers:
       Memory operations issued after the ACQUIRE will be completed after the
       ACQUIRE operation has completed.

-     Memory operations issued before the ACQUIRE may be completed after the
-     ACQUIRE operation has completed.  An smp_mb__before_spinlock(), combined
-     with a following ACQUIRE, orders prior loads against subsequent stores and
-     stores and prior stores against subsequent stores.  Note that this is
-     weaker than smp_mb()!  The smp_mb__before_spinlock() primitive is free on
-     many architectures.
+     Memory operations issued before the ACQUIRE may be completed after
+     the ACQUIRE operation has completed.  An smp_mb__before_spinlock(),
+     combined with a following ACQUIRE, orders prior loads against
+     subsequent loads and stores and also orders prior stores against
+     subsequent stores.  Note that this is weaker than smp_mb()!  The
+     smp_mb__before_spinlock() primitive is free on many architectures.

   (2) RELEASE operation implication:

@@ -1717,23 +1717,47 @@ the two accesses can themselves then cross:

  	*A = a;
  	ACQUIRE M
-	RELEASE M
+	RELEASE N
  	*B = b;

  may occur as:

-	ACQUIRE M, STORE *B, STORE *A, RELEASE M

This example should remain as is; it refers to the porosity of a critical
section to loads and stores occurring outside that critical section, and,
importantly, to the fact that LOCK + UNLOCK is not a full barrier. It documents
that memory operations from either side of the critical section may cross
into it (in the absence of other specific memory barriers). In other words,
it is the example corresponding to implication (1) above.

Regards,
Peter Hurley

--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html