On 11/12/24 5:13 PM, Alexei Starovoitov wrote:
> On Tue, Nov 12, 2024 at 8:41 AM Yonghong Song <yonghong.song@xxxxxxxxx> wrote:
>> +
>> +static void priv_stack_check_guard(void __percpu *priv_stack_ptr, int alloc_size,
>> +				   struct bpf_prog *prog)
>> +{
>> +	int cpu, underflow_idx = (alloc_size - PRIV_STACK_GUARD_SZ) >> 3;
>> +	u64 *stack_ptr;
>> +
>> +	for_each_possible_cpu(cpu) {
>> +		stack_ptr = per_cpu_ptr(priv_stack_ptr, cpu);
>> +		if (stack_ptr[0] != PRIV_STACK_GUARD_VAL ||
>> +		    stack_ptr[underflow_idx] != PRIV_STACK_GUARD_VAL) {
>> +			pr_err("BPF private stack overflow/underflow detected for prog %s\n",
>> +			       bpf_get_prog_name(prog));
>> +			break;
>> +		}
>> +	}
>> +}
> I was tempted to change pr_err() to WARN() to make sure this kind of bug
> is very obvious, but left it as-is.
> I think kasan-ing JITed loads/stores and adding poison to the guards
> will be a bigger win.
> The bpf prog/verifier bug will be spotted right away instead of
> later during jit_free.
Agree. I will work on this as a follow-up.