On Thu, Oct 19, 2023 at 02:20:35PM +0800, Hou Tao wrote:
> Hi Paul,
>
> On 10/19/2023 12:54 PM, Paul E. McKenney wrote:
> > On Thu, Oct 19, 2023 at 09:07:07AM +0800, Hou Tao wrote:
> >> Hi Paul,
> >>
> >> On 10/19/2023 6:28 AM, Paul E. McKenney wrote:
> >>> bpf: Fold smp_mb__before_atomic() into atomic_set_release()
> >>>
> >>> The bpf_user_ringbuf_drain() BPF_CALL function uses an atomic_set()
> >>> immediately preceded by smp_mb__before_atomic() so as to order storing
> >>> of ring-buffer consumer and producer positions prior to the atomic_set()
> >>> call's clearing of the ->busy flag, as follows:
> >>>
> >>>         smp_mb__before_atomic();
> >>>         atomic_set(&rb->busy, 0);
> >>>
> >>> Although this works given current architectures and implementations, and
> >>> given that this only needs to order prior writes against a later write,
> >>> it does so only by accident, because smp_mb__before_atomic() is
> >>> guaranteed to work only with atomic read-modify-write operations, and
> >>> not at all with things like atomic_set() and atomic_read().
> >>>
> >>> Note especially that smp_mb__before_atomic() will not, repeat *not*,
> >>> order the prior write to "a" before the subsequent non-read-modify-write
> >>> atomic read from "b", even on strongly ordered systems such as x86:
> >>>
> >>>         WRITE_ONCE(a, 1);
> >>>         smp_mb__before_atomic();
> >>>         r1 = atomic_read(&b);
> >>
> >> The reason is that smp_mb__before_atomic() is defined as a no-op and
> >> atomic_read() on x86-64 is just READ_ONCE(), right?
> >
> > The real reason is that smp_mb__before_atomic() is not defined to do
> > anything unless followed by an atomic read-modify-write operation,
> > and atomic_read(), atomic64_read(), atomic_set(), and so on are not
> > read-modify-write operations.
>
> I see. Thanks for the explanation. It seems I did not read
> Documentation/atomic_t.txt carefully; it says:
>
>     The barriers:
>
>       smp_mb__{before,after}_atomic()
>
>     only apply to the RMW atomic ops and can be used to augment/upgrade the
>     ordering inherent to the op.

That is the place!

> > As you point out, one implementation consequence of this is that
> > smp_mb__before_atomic() is nothingness on x86.
> >
> >> And it seems that I also used smp_mb__before_atomic() in a wrong way in
> >> patch [1]. The memory ordering in the posted patch is:
> >>
> >>         process X                             process Y
> >>         atomic64_dec_and_test(&map->usercnt)
> >>         READ_ONCE(timer->timer)
> >>                                               timer->timer = t
> >
> > The above two lines are supposed to be accessing the same field, correct?
> > If so, process Y's store really should be WRITE_ONCE().
>
> Yes. These two processes are accessing the same field (namely
> timer->timer). Is WRITE_ONCE(xx) still necessary when the write of
> timer->timer in process Y is protected by a spin-lock?

If there is any possibility of a concurrent reader, that is, a reader
not holding that same lock, then yes, you should use WRITE_ONCE(), as
sketched after the reading list below.  Compilers can do pretty vicious
things to unmarked reads and writes.  But don't take my word for it,
here are a few writeups:

o	"Who's afraid of a big bad optimizing compiler?" (series)
	https://lwn.net/Articles/793253
	https://lwn.net/Articles/799218

o	"An introduction to lockless algorithms" (Paolo Bonzini series)
	https://lwn.net/Articles/844224
	https://lwn.net/Articles/846700
	https://lwn.net/Articles/847481
	https://lwn.net/Articles/847973
	https://lwn.net/Articles/849237
	https://lwn.net/Articles/850202

o	"Is Parallel Programming Hard, And, If So, What Can You Do About It?"
	Section 4.3.4 ("Accessing Shared Variables")
	https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html
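To make that concrete, here is a minimal sketch.  The structure,
variable names, and locking scheme are hypothetical, chosen only to
mirror the situation in the thread (the lock serializes writers, while
the reader intentionally runs locklessly):

	/* Hypothetical state: ->timer is written under ->lock, but may
	 * also be read by code not holding that lock.
	 */
	struct timer_state {
		spinlock_t lock;
		struct hrtimer *timer;
	};

	/* Writer: holds the lock, but the lock does not constrain the
	 * lockless reader, so the store must still be marked in order
	 * to prevent store tearing and other compiler mischief.
	 */
	spin_lock(&ts->lock);
	WRITE_ONCE(ts->timer, t);
	spin_unlock(&ts->lock);

	/* Lockless reader: the load must be marked for the same reason. */
	cur = READ_ONCE(ts->timer);
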
> >>                                               // it won't work
> >>                                               smp_mb__before_atomic()
> >>                                               atomic64_read(&map->usercnt)
> >>
> >> For the problem, it seems that I need to replace smp_mb__before_atomic()
> >> with smp_mb() to fix the memory ordering, right?
> >
> > Yes, because smp_mb() will order the prior store against that later load.
>
> Thanks. Will fix the patch.

Very good!

							Thanx, Paul

> Regards,
> Hou
>
> >
> > 							Thanx, Paul
> >
> >> Regards,
> >> Hou
> >>
> >> [1]:
> >> https://lore.kernel.org/bpf/20231017125717.241101-2-houtao@xxxxxxxxxxxxxxx/
> >>
> >>
> >>> Therefore, replace the smp_mb__before_atomic() and atomic_set() with
> >>> atomic_set_release() as follows:
> >>>
> >>>         atomic_set_release(&rb->busy, 0);
> >>>
> >>> This is no slower (and sometimes is faster) than the original, and also
> >>> provides a formal guarantee of ordering that the original lacks.
> >>>
> >>> Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
> >>> Acked-by: David Vernet <void@xxxxxxxxxxxxx>
> >>> Cc: Andrii Nakryiko <andrii@xxxxxxxxxx>
> >>> Cc: Alexei Starovoitov <ast@xxxxxxxxxx>
> >>> Cc: Daniel Borkmann <daniel@xxxxxxxxxxxxx>
> >>> Cc: Martin KaFai Lau <martin.lau@xxxxxxxxx>
> >>> Cc: Song Liu <song@xxxxxxxxxx>
> >>> Cc: Yonghong Song <yonghong.song@xxxxxxxxx>
> >>> Cc: John Fastabend <john.fastabend@xxxxxxxxx>
> >>> Cc: KP Singh <kpsingh@xxxxxxxxxx>
> >>> Cc: Stanislav Fomichev <sdf@xxxxxxxxxx>
> >>> Cc: Hao Luo <haoluo@xxxxxxxxxx>
> >>> Cc: Jiri Olsa <jolsa@xxxxxxxxxx>
> >>> Cc: <bpf@xxxxxxxxxxxxxxx>
> >>>
> >>> diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
> >>> index f045fde632e5..0ee653a936ea 100644
> >>> --- a/kernel/bpf/ringbuf.c
> >>> +++ b/kernel/bpf/ringbuf.c
> >>> @@ -770,8 +770,7 @@ BPF_CALL_4(bpf_user_ringbuf_drain, struct bpf_map *, map,
> >>>  	/* Prevent the clearing of the busy-bit from being reordered before the
> >>>  	 * storing of any rb consumer or producer positions.
> >>>  	 */
> >>> -	smp_mb__before_atomic();
> >>> -	atomic_set(&rb->busy, 0);
> >>> +	atomic_set_release(&rb->busy, 0);
> >>>
> >>>  	if (flags & BPF_RB_FORCE_WAKEUP)
> >>>  		irq_work_queue(&rb->work);
> >>>
> >>> .
>
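To summarize the two orderings discussed above in one place, here is a
rough sketch.  The names follow the thread, but the surrounding logic
is elided, so treat this as illustrative rather than as the actual
kernel code:

	/* Case 1: order prior writes before a later write.  A release
	 * store suffices, so the smp_mb__before_atomic()/atomic_set()
	 * pair becomes a single atomic_set_release(), as in the patch
	 * above.
	 */
	/* ... store ring-buffer consumer and producer positions ... */
	atomic_set_release(&rb->busy, 0);

	/* Case 2: order a prior store before a later load, as process Y
	 * needs in the diagram above.  A release store is not enough,
	 * and smp_mb__before_atomic() does nothing before the non-RMW
	 * atomic64_read(), so a full barrier is required; the store is
	 * marked because process X reads the field locklessly.
	 */
	WRITE_ONCE(timer->timer, t);
	smp_mb();	/* Order the store above against the load below. */
	if (!atomic64_read(&map->usercnt)) {
		/* ... map is being torn down; undo the assignment ... */
	}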