On Tue, Oct 26, 2021 at 09:01:00AM +0200, Peter Zijlstra wrote:
> On Mon, Oct 25, 2021 at 10:54:16PM +0800, Boqun Feng wrote:
> > diff --git a/tools/memory-model/litmus-tests/LB+unlocklockonceonce+poacquireonce.litmus b/tools/memory-model/litmus-tests/LB+unlocklockonceonce+poacquireonce.litmus
> > new file mode 100644
> > index 000000000000..955b9c7cdc7f
> > --- /dev/null
> > +++ b/tools/memory-model/litmus-tests/LB+unlocklockonceonce+poacquireonce.litmus
> > @@ -0,0 +1,33 @@
> > +C LB+unlocklockonceonce+poacquireonce
> > +
> > +(*
> > + * Result: Never
> > + *
> > + * If two locked critical sections execute on the same CPU, all accesses
> > + * in the first must execute before any accesses in the second, even if
> > + * the critical sections are protected by different locks.
> 
> One small nit; the above "all accesses" reads as if:
> 
> 	spin_lock(s);
> 	WRITE_ONCE(*x, 1);
> 	spin_unlock(s);
> 	spin_lock(t);
> 	r1 = READ_ONCE(*y);
> 	spin_unlock(t);
> 
> would also work, except of course that's the one reorder allowed by TSO.

I applied this series with Peter's Acked-by, and with the above comment
reading as follows:

+(*
+ * Result: Never
+ *
+ * If two locked critical sections execute on the same CPU, all accesses
+ * in the first must execute before any accesses in the second, even if the
+ * critical sections are protected by different locks.  The one exception
+ * to this rule is that (consistent with TSO) a prior write can be reordered
+ * with a later read from the viewpoint of a process not holding both locks.
+ *)

Thank you all!

							Thanx, Paul
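For reference, Peter's exception case could be written up as a litmus test
in its own right.  The test name, the second process with its full barrier,
and the stated result are my additions, not part of the applied series; per
the revised comment above, the write-read reorder should be visible to P1,
which holds neither lock, so the cycle is expected to be allowed:

	C LB+unlocklockonceonce+sb

	(*
	 * Result: Sometimes (expected, per the discussion above)
	 *
	 * P0's write to x in its first critical section can be reordered
	 * with its later read of y in the second critical section, as
	 * observed by P1, which does not hold either lock.
	 *)

	{}

	P0(spinlock_t *s, spinlock_t *t, int *x, int *y)
	{
		int r1;

		spin_lock(s);
		WRITE_ONCE(*x, 1);
		spin_unlock(s);
		spin_lock(t);
		r1 = READ_ONCE(*y);
		spin_unlock(t);
	}

	P1(int *x, int *y)
	{
		int r2;

		WRITE_ONCE(*y, 1);
		smp_mb();
		r2 = READ_ONCE(*x);
	}

	exists (0:r1=0 /\ 1:r2=0)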