This is a note to let you know that I've just added the patch titled

    bpf: fix bpf_tail_call() x64 JIT

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     bpf-fix-bpf_tail_call-x64-jit.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


>From foo@baz Mon Jan 29 13:22:08 CET 2018
From: Daniel Borkmann <daniel@xxxxxxxxxxxxx>
Date: Mon, 29 Jan 2018 02:48:55 +0100
Subject: bpf: fix bpf_tail_call() x64 JIT
To: gregkh@xxxxxxxxxxxxxxxxxxx
Cc: ast@xxxxxxxxxx, stable@xxxxxxxxxxxxxxx, Alexei Starovoitov <ast@xxxxxx>, "David S . Miller" <davem@xxxxxxxxxxxxx>
Message-ID: <b7bd813935a7bc6a5f4fe4a3f199034f571c9b70.1517190206.git.daniel@xxxxxxxxxxxxx>

From: Alexei Starovoitov <ast@xxxxxx>

[ upstream commit 90caccdd8cc0215705f18b92771b449b01e2474a ]

- bpf prog_array, just like all other types of bpf array, accepts a
  32-bit index. Clarify that in the comment.
- fix the x64 JIT of bpf_tail_call(), which was incorrectly loading 8
  instead of 4 bytes
- tighten the corresponding check in the interpreter to stay consistent

The JIT bug can be triggered after the introduction of the BPF_F_NUMA_NODE
flag in commit 96eabe7a40aa in 4.14. Before that, map_flags would stay
zero, so even though the JIT code is wrong it would still check bounds
correctly. Hence the two Fixes tags. All other JITs don't have this
problem.

Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
Fixes: 96eabe7a40aa ("bpf: Allow selecting numa node during map creation")
Fixes: b52f00e6a715 ("x86: bpf_jit: implement bpf_tail_call() helper")
Acked-by: Daniel Borkmann <daniel@xxxxxxxxxxxxx>
Acked-by: Martin KaFai Lau <kafai@xxxxxx>
Reviewed-by: Eric Dumazet <edumazet@xxxxxxxxxx>
Signed-off-by: David S. Miller <davem@xxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 arch/x86/net/bpf_jit_comp.c |    4 ++--
 kernel/bpf/core.c           |    2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -278,9 +278,9 @@ static void emit_bpf_tail_call(u8 **ppro
 	/* if (index >= array->map.max_entries)
 	 *   goto out;
 	 */
-	EMIT4(0x48, 0x8B, 0x46,                   /* mov rax, qword ptr [rsi + 16] */
+	EMIT2(0x89, 0xD2);                        /* mov edx, edx */
+	EMIT3(0x39, 0x56,                         /* cmp dword ptr [rsi + 16], edx */
 	      offsetof(struct bpf_array, map.max_entries));
-	EMIT3(0x48, 0x39, 0xD0);                  /* cmp rax, rdx */
 #define OFFSET1 43 /* number of bytes to jump */
 	EMIT2(X86_JBE, OFFSET1);                  /* jbe out */
 	label1 = cnt;
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -715,7 +715,7 @@ select_insn:
 		struct bpf_map *map = (struct bpf_map *) (unsigned long) BPF_R2;
 		struct bpf_array *array = container_of(map, struct bpf_array, map);
 		struct bpf_prog *prog;
-		u64 index = BPF_R3;
+		u32 index = BPF_R3;
 
 		if (unlikely(index >= array->map.max_entries))
 			goto out;


Patches currently in stable-queue which might be from daniel@xxxxxxxxxxxxx are

queue-4.9/bpf-avoid-false-sharing-of-map-refcount-with-max_entries.patch
queue-4.9/x86-bpf_jit-small-optimization-in-emit_bpf_tail_call.patch
queue-4.9/bpf-reject-stores-into-ctx-via-st-and-xadd.patch
queue-4.9/bpf-fix-32-bit-divide-by-zero.patch
queue-4.9/bpf-fix-bpf_tail_call-x64-jit.patch
queue-4.9/bpf-arsh-is-not-supported-in-32-bit-alu-thus-reject-it.patch
queue-4.9/bpf-fix-divides-by-zero.patch
queue-4.9/bpf-introduce-bpf_jit_always_on-config.patch
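
For context, here is a minimal userspace C sketch of why the old 8-byte
load was wrong. It is not part of the patch: the struct below is a
simplified, hypothetical stand-in for the relevant fields of struct
bpf_map (where max_entries is immediately followed by map_flags), and
little-endian byte order is assumed, as on x86-64. A qword load at the
offset of max_entries drags map_flags into the upper 32 bits of the
bound, so once any map flag is set (e.g. BPF_F_NUMA_NODE), out-of-range
indexes can pass the comparison:

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	/* Simplified, hypothetical stand-in for the relevant part of
	 * struct bpf_map: max_entries (u32) is immediately followed
	 * by map_flags (u32).
	 */
	struct fake_bpf_map {
		uint32_t max_entries;
		uint32_t map_flags;
	};

	int main(void)
	{
		struct fake_bpf_map map = {
			.max_entries = 4,
			.map_flags   = 1,	/* e.g. a flag bit set */
		};
		uint64_t index = 10;		/* clearly out of bounds */

		/* Buggy check, mimicking "mov rax, qword ptr [rsi + 16]":
		 * on little endian the loaded bound is
		 * max_entries | ((u64)map_flags << 32) = 0x100000004,
		 * so index 10 slips through.
		 */
		uint64_t bound64;
		memcpy(&bound64, &map.max_entries, sizeof(bound64));
		printf("buggy: bound=0x%llx, index %llu %s\n",
		       (unsigned long long)bound64, (unsigned long long)index,
		       index < bound64 ? "passes (WRONG)" : "rejected");

		/* Fixed check: 32-bit compare against max_entries only,
		 * with the index truncated to u32, matching the new
		 * "mov edx, edx" + "cmp dword ptr [rsi + 16], edx" pair
		 * and the interpreter's u32 index.
		 */
		uint32_t idx32 = (uint32_t)index;
		printf("fixed: max_entries=%u, index %u %s\n",
		       map.max_entries, idx32,
		       idx32 < map.max_entries ? "passes" : "rejected");
		return 0;
	}

The "mov edx, edx" in the fixed JIT zero-extends the 32-bit index into
rdx, so both sides of the comparison are effectively 32-bit, just as the
interpreter's u32 index now guarantees.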