On Sun, Dec 22, 2019 at 11:48 PM Yonghong Song <yhs@xxxxxx> wrote:
>
>
>
> On 12/22/19 10:18 PM, Andrii Nakryiko wrote:
> > Clang patch [0] enables emitting relocatable generic ALU/ALU64 instructions
> > (i.e., shifts and arithmetic operations), as well as generic load/store
> > instructions. The former ones are already supported by libbpf as is. This
> > patch adds further support for load/store instructions. Relocatable field
> > offset is encoded in BPF instruction's 16-bit offset section and is adjusted
> > by libbpf based on target kernel BTF.
> >
> > These Clang changes and corresponding libbpf changes allow for more succinct
> > generated BPF code by encoding relocatable field reads as a single
> > LD/ST/LDX/STX instruction. It also enables relocatable access to BPF context.
> > Previously, if a context struct (e.g., __sk_buff) was accessed with CO-RE
> > relocations (e.g., due to preserve_access_index attribute), it would be
> > rejected by BPF verifier due to modified context pointer dereference. With
> > Clang patch, such context accesses are both relocatable and have a fixed
> > offset from the point of view of BPF verifier.
> >
> > [0] https://reviews.llvm.org/D71790
> >
> > Signed-off-by: Andrii Nakryiko <andriin@xxxxxx>
> > ---
> >  tools/lib/bpf/libbpf.c | 32 +++++++++++++++++++++++++++++---
> >  1 file changed, 29 insertions(+), 3 deletions(-)
> >
> > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > index 9576a90c5a1c..2dbc2204a02c 100644
> > --- a/tools/lib/bpf/libbpf.c
> > +++ b/tools/lib/bpf/libbpf.c
> > @@ -18,6 +18,7 @@
> >  #include <stdarg.h>
> >  #include <libgen.h>
> >  #include <inttypes.h>
> > +#include <limits.h>
> >  #include <string.h>
> >  #include <unistd.h>
> >  #include <endian.h>
> > @@ -3810,11 +3811,13 @@ static int bpf_core_reloc_insn(struct bpf_program *prog,
> >  	insn = &prog->insns[insn_idx];
> >  	class = BPF_CLASS(insn->code);
> >
> > -	if (class == BPF_ALU || class == BPF_ALU64) {
> > +	switch (class) {
> > +	case BPF_ALU:
> > +	case BPF_ALU64:
> >  		if (BPF_SRC(insn->code) != BPF_K)
> >  			return -EINVAL;
> >  		if (!failed && validate && insn->imm != orig_val) {
> > -			pr_warn("prog '%s': unexpected insn #%d value: got %u, exp %u -> %u\n",
> > +			pr_warn("prog '%s': unexpected insn #%d (ALU/ALU64) value: got %u, exp %u -> %u\n",
> >  				bpf_program__title(prog, false), insn_idx,
> >  				insn->imm, orig_val, new_val);
> >  			return -EINVAL;
> > @@ -3824,7 +3827,30 @@ static int bpf_core_reloc_insn(struct bpf_program *prog,
> >  		pr_debug("prog '%s': patched insn #%d (ALU/ALU64)%s imm %u -> %u\n",
> >  			 bpf_program__title(prog, false), insn_idx,
> >  			 failed ? " w/ failed reloc" : "", orig_val, new_val);
> > -	} else {
> > +		break;
> > +	case BPF_LD:
>
> Maybe we should remove BPF_LD here? BPF_LD is used for ld_imm64, ld_abs
> and ld_ind, where the insn->off = 0 and not really used.

Sure, I can drop BPF_LD case. Will send v2 soon.
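Btw, for anyone following along, here's a rough sketch (not from the patch or
selftests; the trimmed-down struct below is purely illustrative) of the kind of
BPF-side code this enables. With the Clang change in [0], the field read
compiles to a single LDX whose 16-bit off field carries the CO-RE relocation
that libbpf patches against kernel BTF, so the verifier still sees an ordinary
fixed-offset ctx access instead of a modified ctx pointer:

/* illustrative sketch only -- trimmed-down ctx definition, not the real
 * uapi __sk_buff; builds with something like: clang -O2 -g -target bpf -c
 */
struct __sk_buff {
	unsigned int len;
} __attribute__((preserve_access_index));

__attribute__((section("tc"), used))
int read_len(struct __sk_buff *skb)
{
	/* emitted as one insn: rX = *(u32 *)(r1 + <relocatable off>) */
	return skb->len;
}

The relocation record lands in .BTF.ext, and that's what bpf_core_reloc_insn()
ends up rewriting at load time.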
> > +	case BPF_LDX:
> > +	case BPF_ST:
> > +	case BPF_STX:
> > +		if (!failed && validate && insn->off != orig_val) {
> > +			pr_warn("prog '%s': unexpected insn #%d (LD/LDX/ST/STX) value: got %u, exp %u -> %u\n",
> > +				bpf_program__title(prog, false), insn_idx,
> > +				insn->off, orig_val, new_val);
> > +			return -EINVAL;
> > +		}
> > +		if (new_val > SHRT_MAX) {
> > +			pr_warn("prog '%s': insn #%d (LD/LDX/ST/STX) value too big: %u\n",
> > +				bpf_program__title(prog, false), insn_idx,
> > +				new_val);
> > +			return -ERANGE;
> > +		}
> > +		orig_val = insn->off;
> > +		insn->off = new_val;
> > +		pr_debug("prog '%s': patched insn #%d (LD/LDX/ST/STX)%s off %u -> %u\n",
> > +			 bpf_program__title(prog, false), insn_idx,
> > +			 failed ? " w/ failed reloc" : "", orig_val, new_val);
> > +		break;
> > +	default:
> >  		pr_warn("prog '%s': trying to relocate unrecognized insn #%d, code:%x, src:%x, dst:%x, off:%x, imm:%x\n",
> >  			bpf_program__title(prog, false),
> >  			insn_idx, insn->code, insn->src_reg, insn->dst_reg,
> >
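One more aside on the off handling: the relocated offset has to fit the signed
16-bit off member of struct bpf_insn, which is exactly what the SHRT_MAX guard
above enforces before the field is rewritten. A stand-alone sketch of just that
step (patch_ldst_off is a made-up name, not a libbpf API):

#include <errno.h>
#include <limits.h>
#include <linux/bpf.h>

/* sketch of the LDX/ST/STX patching step: reject offsets that do not fit
 * the signed 16-bit insn->off field, otherwise rewrite it in place
 */
static int patch_ldst_off(struct bpf_insn *insn, __u32 new_off)
{
	if (new_off > SHRT_MAX)
		return -ERANGE;

	insn->off = new_off;
	return 0;
}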