On Fri, 07 Jul 2023, Jonas Oberhauser <jonas.oberhauser@xxxxxxxxxxxxxxx> wrote:

[...]

>> This is a request for comments on extending the atomic builtins API
>> to help avoid redundant memory barriers. Indeed, there are
>> discrepancies between the Linux kernel memory consistency model
>> (LKMM) and the C11/C++11 memory consistency model [0]. For example,
>> fully-ordered atomic operations in LKMM, like xchg and a successful
>> cmpxchg, have implicit memory barriers before/after the operations
>> [1-2], while atomic operations using the __ATOMIC_SEQ_CST memory
>> order in C11/C++11 do not provide the ordering guarantees of an
>> __ATOMIC_SEQ_CST atomic thread fence with respect to other
>> non-SEQ_CST operations [3].
>
> The issues run quite a bit deeper than this. The two models have two
> completely different perspectives that are largely incompatible.

Agreed. Our intent is not to close the gap completely, but to reduce
the gap between the two models by supporting the "full barrier
before/after" semantics of LKMM in the C11/C++11 memory model.

> I think all you can really do is bridge the gap at the level of the
> generated assembly. I.e., don't bridge the gap between LKMM and the
> C11 MCM. Bridge the gap between the assembly code generated by C11
> atomics and the one generated by LKMM. But I'm not sure that's
> really the task here.

We have considered analyzing the assembler output of different
toolchain versions in order to manually generate our own before/after
fences. However, nothing prevents a toolchain from changing the
emitted assembler in the future, which would make things fragile. The
only thing that is guaranteed not to change is the definitions in the
standard (C11/C++11). Anything else is fair game for optimizations.

>> [...] For example, to make Read-Modify-Write (RMW) operations match
>> the Linux kernel "full barrier before/after" semantics, liburcu's
>> uatomic API has to emit both a SEQ_CST RMW operation and a
>> subsequent SEQ_CST thread fence, which leads to duplicated barriers
>> in some cases.
>
> Does it have to though? Can't you just do e.g. a release RMW
> operation followed by an after_atomic fence? And for loads, a
> SEQ_CST fence followed by an acquire load? Analogously (but:
> mirrored) for stores.

That would not improve anything for RMW. Consider the following
example and its resulting assembler on x86-64 gcc 13.1 -O2:

  int exchange(int *x, int y)
  {
          int r = __atomic_exchange_n(x, y, __ATOMIC_RELEASE);
          __atomic_thread_fence(__ATOMIC_SEQ_CST);
          return r;
  }

  exchange:
          movl    %esi, %eax
          xchgl   (%rdi), %eax
          lock orq $0, (%rsp)  ;; Redundant with previous exchange
          ret

It would also make the exchange weaker, in the sense of the C11/C++11
memory model, by losing its acquire and sequential consistency
semantics.

[...]

>> // Always NOP.
>> __atomic_thread_fence_{before,after}_rmw(int rmw_memorder,
>>                                          int fence_memorder)
>
> I currently don't feel comfortable adding such extensions to LKMM
> (or a compiler API for that matter).

There is no plan to add such extensions to LKMM, only to extend the
current atomic builtins API of toolchains.
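To illustrate the duplicated barriers mentioned above, here is a
sketch of the pattern liburcu's uatomic layer currently has to emit
for an LKMM-style fully-ordered exchange. The wrapper name
uatomic_xchg_mb is illustrative only, not the actual liburcu symbol:

  /* Portable C11 pattern: a SEQ_CST RMW followed by an explicit
   * SEQ_CST thread fence.  On x86-64 the xchg instruction already
   * implies a full barrier, so the trailing fence is redundant
   * there, but the C11/C++11 model gives no portable way to drop it.
   */
  static inline int uatomic_xchg_mb(int *x, int y)
  {
          int r = __atomic_exchange_n(x, y, __ATOMIC_SEQ_CST);

          __atomic_thread_fence(__ATOMIC_SEQ_CST);
          return r;
  }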
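With the proposed builtins, the same wrapper could instead be written
as below. To be clear, this is only a sketch of the intended usage:
the __atomic_thread_fence_{before,after}_rmw builtins do not exist in
any toolchain today, and the idea is that they expand to nothing
whenever the given RMW memory order already implies the requested
fence:

  static inline int uatomic_xchg_mb(int *x, int y)
  {
          int r;

          /* Proposed builtin; NOP on architectures where a SEQ_CST
           * RMW already implies a full barrier before the operation.
           */
          __atomic_thread_fence_before_rmw(__ATOMIC_SEQ_CST,
                                           __ATOMIC_SEQ_CST);
          r = __atomic_exchange_n(x, y, __ATOMIC_SEQ_CST);
          /* Likewise for the barrier after the operation. */
          __atomic_thread_fence_after_rmw(__ATOMIC_SEQ_CST,
                                          __ATOMIC_SEQ_CST);
          return r;
  }

On x86-64, both calls would expand to nothing and only the xchg would
be emitted; on weakly ordered architectures, only the fences not
already implied by the RMW would be emitted.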
> You mentioned that the goal is to check some code written using
> LKMM primitives with TSAN due to some formal requirements. What
> exactly do these requirements entail? Do you need to check the code
> exactly as it will be executed (modulo the TSAN instrumentation)?
> Is it an option to map to normal builtins with suboptimal
> performance just for the verification purpose, but then run the
> slightly more optimized original code later?

We aim to validate with TSAN the code that will run in production,
minus TSAN itself.

> Specifically for TSAN's ordering requirements, you may need to make
> LKMM's RMWs into acq+rel with an extra mb, even if all that extra
> ordering isn't necessary at the assembler level.
>
> Also note that no matter what you do, due to the two different
> perspectives, TSAN's hb relation may introduce false positive data
> races w.r.t. LKMM. For example, if the happens-before ordering is
> guaranteed through pb starting with coe/fre.

This is why we have implemented our primitives and changed our
algorithms so that they use the acquire/release semantics of the
C11/C++11 memory model.

> Without thinking too hard, it seems to me no matter what fences and
> barriers you introduce, TSAN will not see this kind of ordering and
> consider the situation a data race.

We have come to the same conclusion, mainly because TSAN does not
support thread fences in its verification. This is why we have
implemented an annotation layer that groups relaxed memory accesses
into a single acquire/release event (see the sketch in the postscript
below). This layer makes TSAN aware of the happens-before relations
of the RCU implementations -- and lock-free data structures -- in
Userspace RCU. It comes with the downside of introducing redundant
fences on strongly ordered architectures, because the atomic builtins
of the C11/C++11 memory model provide no means of expressing
fully-ordered atomic operations without relying on explicit thread
fences.

[...]

Thanks,
Olivier

-- 
Olivier Dion
EfficiOS Inc.
https://www.efficios.com
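P.S. For reference, a minimal sketch of the annotation-layer idea
described above. The macro names are illustrative, not Userspace
RCU's actual implementation; it assumes TSAN's dynamic annotations
AnnotateHappensBefore()/AnnotateHappensAfter(), which the TSAN
runtime intercepts when the program is built with -fsanitize=thread:

  /* Declarations matching TSAN's dynamic annotation interface. */
  void AnnotateHappensBefore(const char *file, int line,
                             const volatile void *addr);
  void AnnotateHappensAfter(const char *file, int line,
                            const volatile void *addr);

  /* Group a sequence of relaxed accesses into a single release or
   * acquire event keyed on one address, so that TSAN sees the
   * happens-before edge the algorithm relies on even though the
   * individual accesses are relaxed.
   */
  #define annotate_group_release(addr) \
          AnnotateHappensBefore(__FILE__, __LINE__, (addr))
  #define annotate_group_acquire(addr) \
          AnnotateHappensAfter(__FILE__, __LINE__, (addr))

A writer performs its relaxed stores, calls
annotate_group_release(&flag), then executes the store publishing
flag; a reader that observes flag calls annotate_group_acquire(&flag)
before its relaxed loads, so TSAN records a single release/acquire
pair instead of flagging each relaxed access as a race.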