Re: [PATCH bpf 1/2] bpf: fix x64 JIT code generation for jmp to 1st insn

On Wed, Jul 31, 2019 at 8:43 PM Alexei Starovoitov
<alexei.starovoitov@xxxxxxxxx> wrote:
>
> On Wed, Jul 31, 2019 at 12:36 PM Song Liu <songliubraving@xxxxxx> wrote:
> >
> >
> >
> > > On Jul 30, 2019, at 6:38 PM, Alexei Starovoitov <ast@xxxxxxxxxx> wrote:
> > >
> > > Introduction of bounded loops exposed an old bug in the x64 JIT.
> > > The JIT maintains an array of offsets to the ends of all instructions
> > > to compute jmp offsets.
> > > addrs[0] - offset of the end of the 1st insn (which includes the prologue).
> > > addrs[1] - offset of the end of the 2nd insn.
> > > The JIT didn't keep the offset of the beginning of the 1st insn,
> > > since classic BPF didn't have backward jumps and valid extended BPF
> > > couldn't have a branch to the 1st insn, because it didn't allow loops.
> > > With bounded loops it's possible to construct a valid program that
> > > jumps backwards to the 1st insn.
> > > Fix the JIT by computing:
> > > addrs[0] - offset of the end of the prologue == start of the 1st insn.
> > > addrs[1] - offset of the end of the 1st insn.
> > >
> > > Reported-by: syzbot+35101610ff3e83119b1b@xxxxxxxxxxxxxxxxxxxxxxxxx
> > > Fixes: 2589726d12a1 ("bpf: introduce bounded loops")
> > > Fixes: 0a14842f5a3c ("net: filter: Just In Time compiler for x86-64")
> > > Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
> >
> > Acked-by: Song Liu <songliubraving@xxxxxx>
> >
> > Do we need a similar fix for x86_32?
>
> Right. x86_32 would need a similar fix.
>
> Applied to bpf tree.
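
Aside for readers following the thread: here is a small standalone
sketch of the arithmetic, using made-up byte offsets rather than the
actual kernel code, showing why the old addrs[] layout breaks on a
backward jump to the 1st insn.

#include <stdio.h>

int main(void)
{
	/* Toy model: an 11-byte prologue followed by three 5-byte
	 * jited insns.  New layout from the patch:
	 * addrs[0] = end of prologue, addrs[i] = end of insn i
	 * (1-based).
	 */
	int addrs[4] = { 11, 16, 21, 26 };
	int i = 3;     /* the branch is the 3rd insn */
	int off = -3;  /* BPF jump offset targeting the 1st insn */

	/* x86 branches are relative to the end of the branch insn,
	 * so displacement = target_start - branch_end, where the
	 * target's start is the end of the insn before it.
	 */
	int jmp_offset = addrs[i + off] - addrs[i];

	printf("displacement = %d\n", jmp_offset);  /* 11 - 26 = -15 */

	/* Under the old 0-based layout the same jump would have
	 * needed addrs[-1]: the start of the 1st insn was simply
	 * never recorded.
	 */
	return 0;
}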

Yonghong noticed that it subtly changes the jited linfo.
Surprisingly, the perf annotated output for source code in jited bpf progs
looks exactly the same for several large bpf progs that I've looked at.
This is to be investigated later.

I've applied the fix:
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index a56c95805732..991549a1c5f3 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1181,7 +1181,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)

        if (!image || !prog->is_func || extra_pass) {
                if (image)
-                       bpf_prog_fill_jited_linfo(prog, addrs);
+                       bpf_prog_fill_jited_linfo(prog, addrs + 1);
 out_addrs:
                kfree(addrs);
                kfree(jit_data);
and re-pushed bpf tree.
The new commit is here:
https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf.git/commit/?id=7c2e988f400e83501e0a3568250780609b7c8263

Thanks Yonghong!
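
For context, my reading of why the extra "+ 1" is needed, simplified:
bpf_prog_fill_jited_linfo() was written against the old layout, where
element i is the end offset of insn i.  After the shift that property
lives at addrs[i + 1], so the call site hands it a pointer one slot in.
A toy illustration with the same made-up offsets as above:

#include <stdio.h>

int main(void)
{
	/* addrs[0] = end of prologue, addrs[i + 1] = end of insn i
	 * (0-based bpf insn index, as the linfo code uses).
	 */
	const unsigned int addrs[4] = { 11, 16, 21, 26 };
	const unsigned int *insn_end_off = addrs + 1;

	/* insn_end_off[i] == end of insn i: the old contract the
	 * linfo-filling code still expects.
	 */
	printf("end of insn 0 = %u\n", insn_end_off[0]);  /* 16 */
	return 0;
}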

For bpf-next we need to figure out how to make test_btf more robust.
We can probably check the first few insns for specific jited offsets,
but I don't yet see how to make that work for all archs.
And it will be annoying to keep it working with every change to the JIT.


