Re: [PATCH bpf 1/2] bpf: fix x64 JIT code generation for jmp to 1st insn

> On Jul 30, 2019, at 6:38 PM, Alexei Starovoitov <ast@xxxxxxxxxx> wrote:
> 
> The introduction of bounded loops exposed an old bug in the x64 JIT.
> The JIT maintains an array of offsets to the end of all instructions to
> compute jmp offsets.
> addrs[0] - offset of the end of the 1st insn (that includes the prologue).
> addrs[1] - offset of the end of the 2nd insn.
> The JIT didn't keep the offset of the beginning of the 1st insn,
> since classic BPF didn't have backward jumps and valid extended BPF
> couldn't have a branch to the 1st insn, because it didn't allow loops.
> With bounded loops it's possible to construct a valid program that
> jumps backwards to the 1st insn.
> Fix the JIT by computing:
> addrs[0] - offset of the end of the prologue == start of the 1st insn.
> addrs[1] - offset of the end of the 1st insn.
> 
> Reported-by: syzbot+35101610ff3e83119b1b@xxxxxxxxxxxxxxxxxxxxxxxxx
> Fixes: 2589726d12a1 ("bpf: introduce bounded loops")
> Fixes: 0a14842f5a3c ("net: filter: Just In Time compiler for x86-64")
> Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>

Acked-by: Song Liu <songliubraving@xxxxxx>

Do we need a similar fix for x86_32?
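
For my own reference, here is a minimal sketch of how the rel32 for a
backward jump to the 1st insn falls out of the addrs[] layout after this
patch. It is plain userspace C with made-up offsets and a made-up
instruction count, not the JIT code itself; it only mirrors the way I
read do_jit() computing jump offsets as addrs[i + off] - addrs[i] with
the new 1-based loop.

/*
 * Hypothetical sketch, not the kernel code: made-up offsets for a JITed
 * program with three BPF insns.
 */
#include <stdio.h>

int main(void)
{
	/*
	 * addrs[k] = byte offset of the end of the k-th insn (1-based) in
	 * the JITed image; addrs[0] = end of prologue == start of insn 1.
	 */
	int addrs[] = { 16, 20, 24, 28 };

	/*
	 * A BPF jump at insn 3 with off = -3 targets insn 3 + (-3) + 1 = 1.
	 * rel32 runs from the end of the jumping insn (addrs[3]) to the
	 * start of its target, i.e. addrs[3 + off] == addrs[0].
	 */
	int i = 3, off = -3;
	int rel = addrs[i + off] - addrs[i];

	printf("rel32 = %d\n", rel);	/* 16 - 28 = -12: jumps backwards */
	return 0;
}

Before this patch there was no addrs[] entry for the start of the 1st
insn, so a backward jump to it had nothing correct to resolve against.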

Thanks,
Song

> ---
> arch/x86/net/bpf_jit_comp.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index eaaed5bfc4a4..a56c95805732 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -390,8 +390,9 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
> 
> 	emit_prologue(&prog, bpf_prog->aux->stack_depth,
> 		      bpf_prog_was_classic(bpf_prog));
> +	addrs[0] = prog - temp;
> 
> -	for (i = 0; i < insn_cnt; i++, insn++) {
> +	for (i = 1; i <= insn_cnt; i++, insn++) {
> 		const s32 imm32 = insn->imm;
> 		u32 dst_reg = insn->dst_reg;
> 		u32 src_reg = insn->src_reg;
> @@ -1105,7 +1106,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
> 		extra_pass = true;
> 		goto skip_init_addrs;
> 	}
> -	addrs = kmalloc_array(prog->len, sizeof(*addrs), GFP_KERNEL);
> +	addrs = kmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
> 	if (!addrs) {
> 		prog = orig_prog;
> 		goto out_addrs;
> @@ -1115,7 +1116,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
> 	 * Before first pass, make a rough estimation of addrs[]
> 	 * each BPF instruction is translated to less than 64 bytes
> 	 */
> -	for (proglen = 0, i = 0; i < prog->len; i++) {
> +	for (proglen = 0, i = 0; i <= prog->len; i++) {
> 		proglen += 64;
> 		addrs[i] = proglen;
> 	}
> -- 
> 2.20.0
> 




