The patch below does not apply to the 4.14-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable@xxxxxxxxxxxxxxx>.

To reproduce the conflict and resubmit, you may use the following commands:

git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-4.14.y
git checkout FETCH_HEAD
git cherry-pick -x 0613d8ca9ab382caabe9ed2dceb429e9781e443f
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to '<stable@xxxxxxxxxxxxxxx>' --in-reply-to '2023052830-mothproof-folic-5a0f@gregkh' --subject-prefix 'PATCH 4.14.y' HEAD^..

Possible dependencies:

0613d8ca9ab3 ("bpf: Fix mask generation for 32-bit narrow loads of 64-bit fields")
e2f7fc0ac695 ("bpf: fix undefined behavior in narrow load handling")
46f53a65d2de ("bpf: Allow narrow loads with offset > 0")
bc23105ca0ab ("bpf: fix context access in tracing progs on 32 bit archs")

thanks,

greg k-h

------------------ original commit in Linus's tree ------------------

From 0613d8ca9ab382caabe9ed2dceb429e9781e443f Mon Sep 17 00:00:00 2001
From: Will Deacon <will@xxxxxxxxxx>
Date: Thu, 18 May 2023 11:25:28 +0100
Subject: [PATCH] bpf: Fix mask generation for 32-bit narrow loads of 64-bit
 fields

A narrow load from a 64-bit context field results in a 64-bit load
followed potentially by a 64-bit right-shift and then a bitwise AND
operation to extract the relevant data.

In the case of a 32-bit access, an immediate mask of 0xffffffff is used
to construct a 64-bit BPF_AND operation which then sign-extends the
mask value and effectively acts as a glorified no-op. For example:

0:	61 10 00 00 00 00 00 00	r0 = *(u32 *)(r1 + 0)

results in the following code generation for a 64-bit field:

	ldr	x7, [x7]	// 64-bit load
	mov	x10, #0xffffffffffffffff
	and	x7, x7, x10

Fix the mask generation so that narrow loads always perform a 32-bit
AND operation:

	ldr	x7, [x7]	// 64-bit load
	mov	w10, #0xffffffff
	and	w7, w7, w10

Cc: Alexei Starovoitov <ast@xxxxxxxxxx>
Cc: Daniel Borkmann <daniel@xxxxxxxxxxxxx>
Cc: John Fastabend <john.fastabend@xxxxxxxxx>
Cc: Krzesimir Nowak <krzesimir@xxxxxxxxxx>
Cc: Andrey Ignatov <rdna@xxxxxx>
Acked-by: Yonghong Song <yhs@xxxxxx>
Fixes: 31fd85816dbe ("bpf: permits narrower load from bpf program context fields")
Signed-off-by: Will Deacon <will@xxxxxxxxxx>
Link: https://lore.kernel.org/r/20230518102528.1341-1-will@xxxxxxxxxx
Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index fbcf5a4e2fcd..5871aa78d01a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -17033,7 +17033,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
 				insn_buf[cnt++] = BPF_ALU64_IMM(BPF_RSH,
 								insn->dst_reg,
 								shift);
-			insn_buf[cnt++] = BPF_ALU64_IMM(BPF_AND, insn->dst_reg,
+			insn_buf[cnt++] = BPF_ALU32_IMM(BPF_AND, insn->dst_reg,
 						(1ULL << size * 8) - 1);
 		}
 	}
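
For anyone preparing the backport, here is a minimal standalone C sketch
(not part of the patch; the function names and test value are made up for
illustration) that models why the 64-bit AND with a 0xffffffff immediate is
a no-op: the BPF imm field is a signed 32-bit value, so an ALU64 AND
sign-extends it to 0xffffffffffffffff, whereas an ALU32 AND works on the
low 32 bits and zero-extends the result.

#include <stdint.h>
#include <stdio.h>

/* Model of a 32-bit narrow load at offset 0 from a 64-bit ctx field,
 * masked the way the old ALU64 instruction does it: the s32 immediate
 * is sign-extended to 64 bits before the AND, so the mask is all-ones
 * and the upper 32 bits of the field leak through.
 */
static uint64_t narrow_load_alu64(uint64_t field, int32_t imm)
{
	return field & (uint64_t)(int64_t)imm;
}

/* Same load masked the way the fixed ALU32 instruction does it: the AND
 * is performed on the low 32 bits and the result is zero-extended.
 */
static uint64_t narrow_load_alu32(uint64_t field, int32_t imm)
{
	return (uint32_t)field & (uint32_t)imm;
}

int main(void)
{
	uint64_t field = 0xdeadbeefcafef00dULL;	/* arbitrary 64-bit ctx value */
	/* (1ULL << size * 8) - 1 with size == 4, truncated into the s32
	 * imm field of the instruction, i.e. 0xffffffff stored as -1.
	 */
	int32_t imm = (int32_t)((1ULL << 32) - 1);

	printf("ALU64 AND: %#llx\n",
	       (unsigned long long)narrow_load_alu64(field, imm));
	printf("ALU32 AND: %#llx\n",
	       (unsigned long long)narrow_load_alu32(field, imm));
	return 0;
}

On a typical build the first line prints the full 0xdeadbeefcafef00d (the
mask did nothing), while the second prints only the low 32 bits,
0xcafef00d, matching the difference between the two arm64 code-generation
snippets quoted in the commit message above.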