On 5/2/23 9:57 AM, Will Deacon wrote:
> A narrow load from a 64-bit context field results in a 64-bit load
> followed potentially by a 64-bit right-shift and then a bitwise AND
> operation to extract the relevant data.
>
> In the case of a 32-bit access, an immediate mask of 0xffffffff is used
> to construct a 64-bit BPF_AND operation which then sign-extends the
> mask value and effectively acts as a glorified no-op.
>
> Fix the mask generation so that narrow loads always perform a 32-bit
> AND operation.
>
> Cc: Alexei Starovoitov <ast@xxxxxxxxxx>
> Cc: Daniel Borkmann <daniel@xxxxxxxxxxxxx>
> Cc: John Fastabend <john.fastabend@xxxxxxxxx>
> Cc: Krzesimir Nowak <krzesimir@xxxxxxxxxx>
> Cc: Yonghong Song <yhs@xxxxxx>
> Cc: Andrey Ignatov <rdna@xxxxxx>
> Fixes: 31fd85816dbe ("bpf: permits narrower load from bpf program context fields")
> Signed-off-by: Will Deacon <will@xxxxxxxxxx>
Thanks for the fix! You didn't miss anything. It is a bug, and we probably did not find it because users always use 'u64 val = ctx->u64_field' in their bpf code...
But I think the commit message can be improved. An example showing the difference without and with this patch would explain the issue much better.

Acked-by: Yonghong Song <yhs@xxxxxx>
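To make the difference concrete, here is a minimal userspace C model of the two mask semantics (illustrative only, not kernel code; the field value is made up): a BPF_ALU64 op sign-extends its 32-bit immediate, whereas a BPF_ALU32 op zero-extends its 32-bit result.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* A 64-bit ctx field whose upper half happens to be nonzero. */
	uint64_t loaded = 0x1122334455667788ULL;

	/* Without the patch: BPF_ALU64_IMM(BPF_AND, dst, 0xffffffff).
	 * The 32-bit immediate is sign-extended to 64 bits, so the
	 * mask becomes 0xffffffffffffffff and the AND is a no-op.
	 */
	uint64_t without = loaded & (uint64_t)(int64_t)(int32_t)0xffffffff;

	/* With the patch: BPF_ALU32_IMM(BPF_AND, dst, 0xffffffff).
	 * 32-bit ALU ops zero-extend their 32-bit result, so the
	 * upper half of the loaded value is discarded as intended.
	 */
	uint64_t with = (uint32_t)loaded & 0xffffffffu;

	printf("without fix: 0x%016llx\n", (unsigned long long)without);
	/* -> 0x1122334455667788: upper 32 bits leak through */
	printf("with fix:    0x%016llx\n", (unsigned long long)with);
	/* -> 0x0000000055667788: correct 32-bit narrow load */
	return 0;
}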
> ---
> I spotted this while playing around with the JIT on arm64. I can't
> figure out why 31fd85816dbe special-cases 8-byte ctx fields in the
> first place, so I fear I may be missing something...
>
>  kernel/bpf/verifier.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index fbcf5a4e2fcd..5871aa78d01a 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -17033,7 +17033,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
>  					insn_buf[cnt++] = BPF_ALU64_IMM(BPF_RSH,
>  									insn->dst_reg,
>  									shift);
> -				insn_buf[cnt++] = BPF_ALU64_IMM(BPF_AND, insn->dst_reg,
> +				insn_buf[cnt++] = BPF_ALU32_IMM(BPF_AND, insn->dst_reg,
>  								(1ULL << size * 8) - 1);
>  			}
>  		}
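For anyone who wants to reproduce this end-to-end, here is an untested sketch of a BPF program that should exercise the patched path; the section name and the choice of the 64-bit tstamp field are assumptions, and whether a given program type permits this narrow ctx read is ultimately up to the verifier:

/* narrow_load.bpf.c: clang -O2 -target bpf -c narrow_load.bpf.c */
#include <linux/bpf.h>

char _license[] __attribute__((section("license"), used)) = "GPL";

__attribute__((section("tc"), used))
int narrow_load(struct __sk_buff *skb)
{
	/* A 4-byte read of the 64-bit tstamp ctx field. The verifier
	 * rewrites this into a 64-bit load plus the AND patched above;
	 * without the fix, the upper 32 bits of tstamp leak into the
	 * return value on little-endian targets.
	 */
	return *(volatile __u32 *)&skb->tstamp;
}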