On Thu, Aug 18, 2022 at 5:07 PM Kuniyuki Iwashima <kuniyu@xxxxxxxxxx> wrote:
>
> From: Alexei Starovoitov <alexei.starovoitov@xxxxxxxxx>
> Date: Thu, 18 Aug 2022 15:49:46 -0700
> > On Wed, Aug 17, 2022 at 9:24 PM Kuniyuki Iwashima <kuniyu@xxxxxxxxxx> wrote:
> > >
> > > A sysctl variable bpf_jit_enable is accessed concurrently, and there is
> > > always a chance of data-race. So, all readers and a writer need some
> > > basic protection to avoid load/store-tearing.
> > >
> > > Fixes: 0a14842f5a3c ("net: filter: Just In Time compiler for x86-64")
> > > Signed-off-by: Kuniyuki Iwashima <kuniyu@xxxxxxxxxx>
> > > ---
> > >  arch/arm/net/bpf_jit_32.c        | 2 +-
> > >  arch/arm64/net/bpf_jit_comp.c    | 2 +-
> > >  arch/mips/net/bpf_jit_comp.c     | 2 +-
> > >  arch/powerpc/net/bpf_jit_comp.c  | 5 +++--
> > >  arch/riscv/net/bpf_jit_core.c    | 2 +-
> > >  arch/s390/net/bpf_jit_comp.c     | 2 +-
> > >  arch/sparc/net/bpf_jit_comp_32.c | 5 +++--
> > >  arch/sparc/net/bpf_jit_comp_64.c | 5 +++--
> > >  arch/x86/net/bpf_jit_comp.c      | 2 +-
> > >  arch/x86/net/bpf_jit_comp32.c    | 2 +-
> > >  include/linux/filter.h           | 2 +-
> > >  net/core/sysctl_net_core.c       | 4 ++--
> > >  12 files changed, 19 insertions(+), 16 deletions(-)
> > >
> > > diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
> > > index 6a1c9fca5260..4b6b62a6fdd4 100644
> > > --- a/arch/arm/net/bpf_jit_32.c
> > > +++ b/arch/arm/net/bpf_jit_32.c
> > > @@ -1999,7 +1999,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
> > >  	}
> > >  	flush_icache_range((u32)header, (u32)(ctx.target + ctx.idx));
> > >
> > > -	if (bpf_jit_enable > 1)
> > > +	if (READ_ONCE(bpf_jit_enable) > 1)
> >
> > Nack.
> > Even if the compiler decides to use single byte loads for some
> > odd reason there is no issue here.
>
> I see, and same for 2nd/3rd patches, right?
>
> Then how about this part?
> It's not data-race nor problematic in practice, but should the value be
> consistent in the same function?
> The 2nd/3rd patches also have this kind of part.

The bpf_jit_enable > 1 is unsupported and buggy.
It will be removed eventually.
Why are you doing these changes if they're not fixing any bugs?
Just to shut up some race sanitizer?

> ---8<---
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index 43e634126514..c71d1e94ee7e 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -122,6 +122,7 @@ bool bpf_jit_needs_zext(void)
>
>  struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
>  {
> +	int jit_enable = READ_ONCE(bpf_jit_enable);
>  	u32 proglen;
>  	u32 alloclen;
>  	u8 *image = NULL;
> @@ -263,13 +264,13 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
>  		}
>  		bpf_jit_build_epilogue(code_base, &cgctx);
>
> -		if (bpf_jit_enable > 1)
> +		if (jit_enable > 1)
>  			pr_info("Pass %d: shrink = %d, seen = 0x%x\n", pass,
>  				proglen - (cgctx.idx * 4), cgctx.seen);
>  	}
>
>  skip_codegen_passes:
> -	if (bpf_jit_enable > 1)
> +	if (jit_enable > 1)
>  		/*
>  		 * Note that we output the base address of the code_base
>  		 * rather than image, since opcodes are in code_base.
> ---8<---
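
For context, here is a minimal userspace sketch of the snapshot pattern in the
quoted powerpc hunk. It is an illustrative analogue only, not kernel code: the
names jit_enable_flag and compile_prog() are hypothetical, and a C11 relaxed
atomic load stands in for the kernel's READ_ONCE(). The point is simply that
the flag is loaded once and the local copy is reused, so every check in the
function sees the same value.

/* Snapshot a concurrently-written flag once and reuse the local copy. */
#include <stdatomic.h>
#include <stdio.h>

/* stand-in for the bpf_jit_enable sysctl; may be written by another thread */
static _Atomic int jit_enable_flag = 2;

static void compile_prog(void)
{
	/* one load, one value for the whole function */
	int jit_enable = atomic_load_explicit(&jit_enable_flag,
					      memory_order_relaxed);

	if (jit_enable > 1)
		printf("pass info: verbose JIT debugging enabled\n");

	/* ... code generation would happen here ... */

	if (jit_enable > 1)	/* consistent with the check above */
		printf("dumping generated image\n");
}

int main(void)
{
	compile_prog();
	return 0;
}

Both branches test the same local, so a concurrent sysctl write between the
two checks cannot make the function print the pass statistics but skip the
image dump, or vice versa.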