From: Jonas Oberhauser <jonas.oberhauser@xxxxxxxxxxxxxxx>

Herd7 transforms successful RMWs with Mb tags by inserting smp_mb()
fences around them.  We emulate this by considering imaginary po edges
before the RMW read and after the RMW write, and by extending the
smp_mb() ordering rule, which currently applies only to the real po
edges found around an actually inserted smp_mb(), to these imaginary
po edges as well.

Reported-by: Viktor Vafeiadis <viktor@xxxxxxxxxxx>
Suggested-by: Alan Stern <stern@xxxxxxxxxxxxxxxxxxx>
Signed-off-by: Jonas Oberhauser <jonas.oberhauser@xxxxxxxxxxxxxxx>
Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
Reviewed-by: Boqun Feng <boqun.feng@xxxxxxxxx>
Tested-by: Boqun Feng <boqun.feng@xxxxxxxxx>
---
 tools/memory-model/linux-kernel.cat | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/tools/memory-model/linux-kernel.cat b/tools/memory-model/linux-kernel.cat
index adf3c4f412296..d7e7bf13c831b 100644
--- a/tools/memory-model/linux-kernel.cat
+++ b/tools/memory-model/linux-kernel.cat
@@ -34,6 +34,16 @@ let R4rmb = R \ Noreturn (* Reads for which rmb works *)
 let rmb = [R4rmb] ; fencerel(Rmb) ; [R4rmb]
 let wmb = [W] ; fencerel(Wmb) ; [W]
 let mb = ([M] ; fencerel(Mb) ; [M]) |
+	(*
+	 * full-barrier RMWs (successful cmpxchg(), xchg(), etc.) act as
+	 * though they were enclosed by smp_mb().
+	 * The effect of these virtual smp_mb() is formalized by adding
+	 * Mb tags to the read and write of the operation, and providing
+	 * the same ordering as though there were additional po edges
+	 * between the Mb tag and the read resp. write.
+	 *)
+	([M] ; po ; [Mb & R]) |
+	([Mb & W] ; po ; [M]) |
 	([M] ; fencerel(Before-atomic) ; [RMW] ; po? ; [M]) |
 	([M] ; po? ; [RMW] ; fencerel(After-atomic) ; [M]) |
 	([M] ; po? ; [LKW] ; fencerel(After-spinlock) ; [M]) |
-- 
2.40.1
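
For intuition, here is a sketch of the kind of guarantee these rules
encode.  The litmus test below is illustrative only (its name and
variables are made up, and it is not added by this patch): since a
successful xchg() acts as a full barrier, the read of y in P0 is
ordered after the xchg(), so in this store-buffering pattern the
exists clause should never be satisfied under LKMM.

C SB+xchgmb+mb

(*
 * Sketch: the xchg() carries the Mb tag, so P0's read of y is
 * ordered after the write to x as if by smp_mb().  With a full
 * barrier on each side, both reads cannot return 0.
 *)

{}

P0(int *x, int *y)
{
	int r0;
	int r1;

	r0 = xchg(x, 1);
	r1 = READ_ONCE(*y);
}

P1(int *x, int *y)
{
	int r1;

	WRITE_ONCE(*y, 1);
	smp_mb();
	r1 = READ_ONCE(*x);
}

exists (0:r1=0 /\ 1:r1=0)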