This is a note to let you know that I've just added the patch titled

    bpf: Fix precision tracking for BPF_ALU | BPF_TO_BE | BPF_END

to the 6.1-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     bpf-fix-precision-tracking-for-bpf_alu-bpf_to_be-bpf_end.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


>From 291d044fd51f8484066300ee42afecf8c8db7b3a Mon Sep 17 00:00:00 2001
From: Shung-Hsi Yu <shung-hsi.yu@xxxxxxxx>
Date: Thu, 2 Nov 2023 13:39:03 +0800
Subject: bpf: Fix precision tracking for BPF_ALU | BPF_TO_BE | BPF_END
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Shung-Hsi Yu <shung-hsi.yu@xxxxxxxx>

commit 291d044fd51f8484066300ee42afecf8c8db7b3a upstream.

BPF_END and BPF_NEG have a different specification for the source bit in
the opcode compared to other ALU/ALU64 instructions: the bit is either
reserved or used to specify the byte-swap endianness. In both cases the
source bit does not encode the source operand location, and src_reg is a
reserved field.

backtrack_insn() currently does not differentiate BPF_END and BPF_NEG
from other ALU/ALU64 instructions, which leads to r0 being incorrectly
marked as precise when processing BPF_ALU | BPF_TO_BE | BPF_END
instructions. This commit teaches backtrack_insn() to correctly mark
precision for such cases.

While precision tracking of BPF_NEG and the other BPF_END instructions
is already correct and does not need fixing, this commit opts to process
all BPF_NEG and BPF_END instructions within the same if-clause to better
align with the current convention used in the verifier
(e.g. check_alu_op).
Fixes: b5dc0163d8fd ("bpf: precise scalar_value tracking")
Cc: stable@xxxxxxxxxxxxxxx
Reported-by: Mohamed Mahmoud <mmahmoud@xxxxxxxxxx>
Closes: https://lore.kernel.org/r/87jzrrwptf.fsf@xxxxxxx
Tested-by: Toke Høiland-Jørgensen <toke@xxxxxxxxxx>
Tested-by: Tao Lyu <tao.lyu@xxxxxxx>
Acked-by: Eduard Zingerman <eddyz87@xxxxxxxxx>
Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@xxxxxxxx>
Link: https://lore.kernel.org/r/20231102053913.12004-2-shung-hsi.yu@xxxxxxxx
Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 kernel/bpf/verifier.c |    7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2596,7 +2596,12 @@ static int backtrack_insn(struct bpf_ver
 	if (class == BPF_ALU || class == BPF_ALU64) {
 		if (!(*reg_mask & dreg))
 			return 0;
-		if (opcode == BPF_MOV) {
+		if (opcode == BPF_END || opcode == BPF_NEG) {
+			/* sreg is reserved and unused
+			 * dreg still need precision before this insn
+			 */
+			return 0;
+		} else if (opcode == BPF_MOV) {
 			if (BPF_SRC(insn->code) == BPF_X) {
 				/* dreg = sreg
 				 * dreg needs precision after this insn


Patches currently in stable-queue which might be from shung-hsi.yu@xxxxxxxx are

queue-6.1/bpf-fix-check_stack_write_fixed_off-to-correctly-spill-imm.patch
queue-6.1/bpf-fix-precision-tracking-for-bpf_alu-bpf_to_be-bpf_end.patch