From: Daniel Borkmann <daniel@xxxxxxxxxxxxx>
Date: Tue, 23 Aug 2022 23:20:29 +0200

> On 8/23/22 8:12 PM, Kuniyuki Iwashima wrote:
> > While reading bpf_jit_limit, it can be changed concurrently.
> > Thus, we need to add READ_ONCE() to its reader.
>
> For sake of a better/clearer commit message, please also provide data about the
> WRITE_ONCE() pairing that this READ_ONCE() targets. This seems to be the case in
> __do_proc_doulongvec_minmax() as far as I can see. For your 2nd sentence above
> please also include load-tearing as main motivation for your fix.

I'll add a better description. Thank you! (For reference, a minimal sketch of the
READ_ONCE()/WRITE_ONCE() pattern is included after the quoted patch below.)

> > Fixes: ede95a63b5e8 ("bpf: add bpf_jit_limit knob to restrict unpriv allocations")
> > Signed-off-by: Kuniyuki Iwashima <kuniyu@xxxxxxxxxx>
> > ---
> > v2:
> >   * Drop other 3 patches (No change for this patch)
> >
> > v1: https://lore.kernel.org/bpf/20220818042339.82992-1-kuniyu@xxxxxxxxxx/
> > ---
> >  kernel/bpf/core.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> > index c1e10d088dbb..3d9eb3ae334c 100644
> > --- a/kernel/bpf/core.c
> > +++ b/kernel/bpf/core.c
> > @@ -971,7 +971,7 @@ pure_initcall(bpf_jit_charge_init);
> >
> >  int bpf_jit_charge_modmem(u32 size)
> >  {
> > -	if (atomic_long_add_return(size, &bpf_jit_current) > bpf_jit_limit) {
> > +	if (atomic_long_add_return(size, &bpf_jit_current) > READ_ONCE(bpf_jit_limit)) {
> >  		if (!bpf_capable()) {
> >  			atomic_long_sub(size, &bpf_jit_current);
> >  			return -EPERM;
> >
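
For reference, here is a minimal userspace sketch of the READ_ONCE()/WRITE_ONCE()
pairing being discussed. It is purely illustrative: shared_limit, writer() and
reader() are made-up names standing in for the sysctl store side and the
bpf_jit_charge_modmem() load side, and the two macros are simplified stand-ins for
the kernel's; the point is only that both sides use single-copy (volatile) accesses
so the compiler cannot tear the load or the store.

	#include <pthread.h>
	#include <stdio.h>

	/* Simplified stand-ins for the kernel macros. */
	#define WRITE_ONCE(x, val)	(*(volatile typeof(x) *)&(x) = (val))
	#define READ_ONCE(x)		(*(volatile typeof(x) *)&(x))

	static long shared_limit = 1000;	/* hypothetical tunable */

	/* Models the sysctl handler side: stores the new value with WRITE_ONCE(). */
	static void *writer(void *arg)
	{
		for (long v = 0; v < 100000; v++)
			WRITE_ONCE(shared_limit, v);
		return NULL;
	}

	/* Models the charge path: loads the current limit with READ_ONCE(). */
	static void *reader(void *arg)
	{
		long seen = 0;

		for (int i = 0; i < 100000; i++)
			seen = READ_ONCE(shared_limit);
		printf("last value seen: %ld\n", seen);
		return NULL;
	}

	int main(void)
	{
		pthread_t w, r;

		pthread_create(&w, NULL, writer, NULL);
		pthread_create(&r, NULL, reader, NULL);
		pthread_join(w, NULL);
		pthread_join(r, NULL);
		return 0;
	}

This builds with gcc -pthread. The kernel's own READ_ONCE()/WRITE_ONCE() do more
than this sketch (e.g. dependency-ordering concerns on some architectures), but the
tearing-prevention role is the same as in the hunk above.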