On Mon, Oct 25, 2021 at 10:54:16PM +0800, Boqun Feng wrote:

> diff --git a/tools/memory-model/litmus-tests/LB+unlocklockonceonce+poacquireonce.litmus b/tools/memory-model/litmus-tests/LB+unlocklockonceonce+poacquireonce.litmus
> new file mode 100644
> index 000000000000..955b9c7cdc7f
> --- /dev/null
> +++ b/tools/memory-model/litmus-tests/LB+unlocklockonceonce+poacquireonce.litmus
> @@ -0,0 +1,33 @@
> +C LB+unlocklockonceonce+poacquireonce
> +
> +(*
> + * Result: Never
> + *
> + * If two locked critical sections execute on the same CPU, all accesses
> + * in the first must execute before any accesses in the second, even if
> + * the critical sections are protected by different locks.

One small nit: the above "all accesses" reads as if:

	spin_lock(s);
	WRITE_ONCE(*x, 1);
	spin_unlock(s);
	spin_lock(t);
	r1 = READ_ONCE(*y);
	spin_unlock(t);

would also work, except of course that's the one reorder allowed by TSO.

> + *)
> +
> +{}
> +
> +P0(spinlock_t *s, spinlock_t *t, int *x, int *y)
> +{
> +	int r1;
> +
> +	spin_lock(s);
> +	r1 = READ_ONCE(*x);
> +	spin_unlock(s);
> +	spin_lock(t);
> +	WRITE_ONCE(*y, 1);
> +	spin_unlock(t);
> +}
> +
> +P1(int *x, int *y)
> +{
> +	int r2;
> +
> +	r2 = smp_load_acquire(y);
> +	WRITE_ONCE(*x, 1);
> +}
> +
> +exists (0:r1=1 /\ 1:r2=1)
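
For completeness, here is a sketch of the SB-flavoured variant I mean; the
test name and P1 are mine, not part of Boqun's patch. Since unlock+lock on
the same CPU is only RCtso, nothing orders P0's store against its later
load, so I would expect herd7 to report Sometimes for this one:

	C SB+unlocklockonceonce+mbonceonce

	(*
	 * Result: Sometimes (assumed, not verified)
	 *
	 * Unlock+lock on the same CPU orders everything except a write in
	 * the first critical section against a read in the second, i.e.
	 * exactly the store->load reorder that TSO permits. Hence P0's
	 * WRITE_ONCE() and READ_ONCE() may be reordered and the exists
	 * clause can be satisfied.
	 *)

	{}

	P0(spinlock_t *s, spinlock_t *t, int *x, int *y)
	{
		int r1;

		spin_lock(s);
		WRITE_ONCE(*x, 1);
		spin_unlock(s);
		spin_lock(t);
		r1 = READ_ONCE(*y);
		spin_unlock(t);
	}

	P1(int *x, int *y)
	{
		int r2;

		WRITE_ONCE(*y, 1);
		smp_mb();
		r2 = READ_ONCE(*x);
	}

	exists (0:r1=0 /\ 1:r2=0)

If I'm not mistaken, adding smp_mb__after_unlock_lock() after the
spin_lock(t) in P0 would upgrade the unlock+lock to a full barrier and
turn that result into Never.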