On Fri, Feb 28, 2025 at 8:29 AM Kumar Kartikeya Dwivedi <memxor@xxxxxxxxx> wrote:
>
> The verifier currently does not permit global subprog calls when a lock
> is held, preemption is disabled, or when IRQs are disabled. This is
> because we don't know whether the global subprog calls sleepable
> functions or not.
>
> In case of locks, there's an additional reason: functions called by the
> global subprog may hold additional locks, etc. The verifier won't know
> while verifying the global subprog whether it was called in a context
> where a spin lock is already held by the program.
>
> Perform summarization of the sleepable nature of a global subprog, just
> like changes_pkt_data, and then allow calls to global subprogs for
> non-sleepable ones from atomic context.
>
> While making this change, I noticed that RCU read sections had no
> protection against sleepable global subprog calls, so include them in
> the checks and fix this while we're at it.
>
> Care needs to be taken to not allow global subprog calls when a regular
> bpf_spin_lock is held. When a resilient spin lock is held, we may want
> to relax this check eventually, but not for now.
>
> Tests covering all of these special conditions are included in the next
> patch.
>
> Fixes: 9bb00b2895cb ("bpf: Add kfunc bpf_rcu_read_lock/unlock()")
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@xxxxxxxxx>
> ---
>  include/linux/bpf_verifier.h |  1 +
>  kernel/bpf/verifier.c        | 50 ++++++++++++++++++++++++++----------
>  2 files changed, 37 insertions(+), 14 deletions(-)
>
> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> index bbd013c38ff9..1b3cfa6cb720 100644
> --- a/include/linux/bpf_verifier.h
> +++ b/include/linux/bpf_verifier.h
> @@ -667,6 +667,7 @@ struct bpf_subprog_info {
>  	/* true if bpf_fastcall stack region is used by functions that can't be inlined */
>  	bool keep_fastcall_stack: 1;
>  	bool changes_pkt_data: 1;
> +	bool sleepable: 1;
>
>  	enum priv_stack_mode priv_stack_mode;
>  	u8 arg_cnt;
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index dcd0da4e62fc..e3560d19d513 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -10317,23 +10317,18 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>  	if (subprog_is_global(env, subprog)) {
>  		const char *sub_name = subprog_name(env, subprog);
>
> -		/* Only global subprogs cannot be called with a lock held. */
>  		if (env->cur_state->active_locks) {
>  			verbose(env, "global function calls are not allowed while holding a lock,\n"
>  				     "use static function instead\n");
>  			return -EINVAL;
>  		}
>
> -		/* Only global subprogs cannot be called with preemption disabled. */
> -		if (env->cur_state->active_preempt_locks) {
> -			verbose(env, "global function calls are not allowed with preemption disabled,\n"
> -				     "use static function instead\n");
> -			return -EINVAL;
> -		}
> -
> -		if (env->cur_state->active_irq_id) {
> -			verbose(env, "global function calls are not allowed with IRQs disabled,\n"
> -				     "use static function instead\n");
> +		if (env->subprog_info[subprog].sleepable &&
> +		    (env->cur_state->active_rcu_lock || env->cur_state->active_preempt_locks ||
> +		     env->cur_state->active_irq_id || !in_sleepable(env))) {
> +			verbose(env, "global functions that may sleep are not allowed in non-sleepable context,\n"
> +				     "i.e., in a RCU/IRQ/preempt-disabled section, or in\n"
> +				     "a non-sleepable BPF program context\n");
>  			return -EINVAL;
>  		}
>
> @@ -16703,6 +16698,14 @@ static void mark_subprog_changes_pkt_data(struct bpf_verifier_env *env, int off)
>  	subprog->changes_pkt_data = true;
>  }
>
> +static void mark_subprog_sleepable(struct bpf_verifier_env *env, int off)
> +{
> +	struct bpf_subprog_info *subprog;
> +
> +	subprog = find_containing_subprog(env, off);
> +	subprog->sleepable = true;
> +}
> +
>  /* 't' is an index of a call-site.
>   * 'w' is a callee entry point.
>   * Eventually this function would be called when env->cfg.insn_state[w] == EXPLORED.
> @@ -16716,6 +16719,7 @@ static void merge_callee_effects(struct bpf_verifier_env *env, int t, int w)
>  	caller = find_containing_subprog(env, t);
>  	callee = find_containing_subprog(env, w);
>  	caller->changes_pkt_data |= callee->changes_pkt_data;
> +	caller->sleepable |= callee->sleepable;
>  }
>
>  /* non-recursive DFS pseudo code
> @@ -17183,9 +17187,20 @@ static int visit_insn(int t, struct bpf_verifier_env *env)
>  		mark_prune_point(env, t);
>  		mark_jmp_point(env, t);
>  	}
> -	if (bpf_helper_call(insn) && bpf_helper_changes_pkt_data(insn->imm))
> -		mark_subprog_changes_pkt_data(env, t);
> -	if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
> +	if (bpf_helper_call(insn)) {
> +		const struct bpf_func_proto *fp;
> +
> +		ret = get_helper_proto(env, insn->imm, &fp);
> +		/* If called in a non-sleepable context program will be
> +		 * rejected anyway, so we should end up with precise
> +		 * sleepable marks on subprogs, except for dead code
> +		 * elimination.

TBH, I'm worried that we are regressing to doing all these side-effect
analyses while disregarding dead code elimination. It's not hypothetical
to have an .rodata variable controlling whether, say, to do
bpf_probe_read_user() (non-sleepable) vs bpf_copy_from_user() (sleepable)
inside a global subprog, depending on some outside configuration (e.g.,
whether we'll be doing SEC("iter.s/task") or it's actually profiler logic
called inside SEC("perf_event"), all controlled by user space). We
already have use cases like this in production, and dead code elimination
is important in such cases. It can probably be worked around with more
global functions and the like, but still, it's worrying that we are
giving up on such an important part of the BPF CO-RE approach: disabling
parts of code "dynamically" before loading BPF programs.
> +		 */
> +		if (ret == 0 && fp->might_sleep)
> +			mark_subprog_sleepable(env, t);
> +		if (bpf_helper_changes_pkt_data(insn->imm))
> +			mark_subprog_changes_pkt_data(env, t);
> +	} else if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
>  		struct bpf_kfunc_call_arg_meta meta;
>
>  		ret = fetch_kfunc_meta(env, insn, &meta, NULL);
> @@ -17204,6 +17219,13 @@ static int visit_insn(int t, struct bpf_verifier_env *env)
>  		 */
>  		mark_force_checkpoint(env, t);
>  	}
> +		/* Same as helpers, if called in a non-sleepable context
> +		 * program will be rejected anyway, so we should end up
> +		 * with precise sleepable marks on subprogs, except for
> +		 * dead code elimination.
> +		 */
> +		if (ret == 0 && is_kfunc_sleepable(&meta))
> +			mark_subprog_sleepable(env, t);
>  	}
>  	return visit_func_call_insn(t, insns, env, insn->src_reg == BPF_PSEUDO_CALL);
>
> --
> 2.43.5
>