The verifier treats bpf_sk_lookup.remote_port as a 32-bit field for
backward compatibility, regardless of what the uapi headers say. The
field is mapped onto the 16-bit bpf_sk_lookup_kern.sport field.
Therefore, reading the most significant 16 bits of
bpf_sk_lookup.remote_port must produce 0, which is currently not the
case.

The problem is that narrow loads with offset > 0 (commit 46f53a65d2de
("bpf: Allow narrow loads with offset > 0")) don't play nicely with
the masking optimization (commit 239946314e57 ("bpf: possibly avoid
extra masking for narrower load in verifier")). In particular, when
the extra masking is suppressed, the shift is suppressed as well,
which is not correct.

Fix by moving the masking suppression check to BPF_AND generation.

Fixes: 46f53a65d2de ("bpf: Allow narrow loads with offset > 0")
Signed-off-by: Ilya Leoshkevich <iii@xxxxxxxxxxxxx>
---
 kernel/bpf/verifier.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index d7473fee247c..195f2e9b5a47 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12848,7 +12848,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
 			return -EINVAL;
 		}
 
-		if (is_narrower_load && size < target_size) {
+		if (is_narrower_load) {
 			u8 shift = bpf_ctx_narrow_access_offset(
 				off, size, size_default) * 8;
 			if (shift && cnt + 1 >= ARRAY_SIZE(insn_buf)) {
@@ -12860,15 +12860,19 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
 				insn_buf[cnt++] = BPF_ALU32_IMM(BPF_RSH,
 								insn->dst_reg,
 								shift);
-				insn_buf[cnt++] = BPF_ALU32_IMM(BPF_AND, insn->dst_reg,
-								(1 << size * 8) - 1);
+				if (size < target_size)
+					insn_buf[cnt++] = BPF_ALU32_IMM(
+						BPF_AND, insn->dst_reg,
+						(1 << size * 8) - 1);
 			} else {
 				if (shift)
 					insn_buf[cnt++] = BPF_ALU64_IMM(BPF_RSH,
 									insn->dst_reg,
 									shift);
-				insn_buf[cnt++] = BPF_ALU64_IMM(BPF_AND, insn->dst_reg,
-								(1ULL << size * 8) - 1);
+				if (size < target_size)
+					insn_buf[cnt++] = BPF_ALU64_IMM(
+						BPF_AND, insn->dst_reg,
+						(1ULL << size * 8) - 1);
 			}
 		}
-- 
2.34.1
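
For illustration only (not part of the patch): a minimal sk_lookup
program sketch that exercises the access being fixed. The program name
is made up, and it assumes the backward-compatible 32-bit remote_port
layout described above plus the usual libbpf build setup.

/* Sketch of the problematic narrow load. Because the kernel-side
 * bpf_sk_lookup_kern.sport is only 16 bits wide, reading the upper
 * half of the 32-bit remote_port field must always yield 0.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("sk_lookup")
int remote_port_high_half(struct bpf_sk_lookup *ctx)
{
	/* 2-byte narrow load of the upper half of the 4-byte field
	 * (offset 2 on little-endian). Here size == target_size, so
	 * before this fix the verifier suppressed the BPF_RSH along
	 * with the BPF_AND, and the load wrongly returned the port
	 * bits instead of 0.
	 */
	__u16 hi = *((__u16 *)&ctx->remote_port + 1);

	return hi == 0 ? SK_PASS : SK_DROP;
}

char _license[] SEC("license") = "GPL";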