On Fri, May 13, 2022 at 8:12 PM Yonghong Song <yhs@xxxxxx> wrote:
>
> Currently, the 64bit relocation value in the instruction
> is computed as follows:
>    __u64 imm = insn[0].imm + ((__u64)insn[1].imm << 32)
>
> Suppose insn[0].imm = -1 (0xffffffff) and insn[1].imm = 1.
> With the above computation, insn[0].imm will first sign-extend
> to 64bit -1 (0xffffffffFFFFFFFF) and then add 0x100000000,
> producing the incorrect value 0xFFFFFFFF. The correct value
> should be 0x1FFFFFFFF.
>
> Changing insn[0].imm to __u32 first will prevent 64bit sign
> extension and fix the issue. Merging the high and low 32bit values
> is also changed from '+' to '|' to be consistent with other
> similar occurrences in the kernel and libbpf.
>
> Acked-by: Dave Marchevsky <davemarchevsky@xxxxxx>
> Signed-off-by: Yonghong Song <yhs@xxxxxx>
> ---

Acked-by: Andrii Nakryiko <andrii@xxxxxxxxxx>

>  tools/lib/bpf/relo_core.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/tools/lib/bpf/relo_core.c b/tools/lib/bpf/relo_core.c
> index aea16343a8f1..78b16cda86fa 100644
> --- a/tools/lib/bpf/relo_core.c
> +++ b/tools/lib/bpf/relo_core.c
> @@ -1027,7 +1027,7 @@ int bpf_core_patch_insn(const char *prog_name, struct bpf_insn *insn,
>  		return -EINVAL;
>  	}
>
> -	imm = insn[0].imm + ((__u64)insn[1].imm << 32);
> +	imm = (__u32)insn[0].imm | ((__u64)insn[1].imm << 32);
>  	if (res->validate && imm != orig_val) {
>  		pr_warn("prog '%s': relo #%d: unexpected insn #%d (LDIMM64) value: got %llu, exp %llu -> %llu\n",
>  			prog_name, relo_idx,
> --
> 2.30.2
>
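
For anyone reading along, here is a minimal standalone sketch (not part of
the patch; it just uses the hypothetical -1 / 1 pair from the commit message)
showing why zero-extending the low half before merging matters:

#include <stdio.h>

int main(void)
{
	int lo = -1;	/* plays the role of insn[0].imm */
	int hi = 1;	/* plays the role of insn[1].imm */

	/* Old computation: lo is sign-extended to 64 bits before the add,
	 * so the sign bits clobber the high half of the result.
	 */
	unsigned long long old_imm = lo + ((unsigned long long)hi << 32);

	/* New computation: cast lo to a 32-bit unsigned value first
	 * (zero extension), then OR in the high half.
	 */
	unsigned long long new_imm = (unsigned int)lo | ((unsigned long long)hi << 32);

	printf("old: 0x%llx\n", old_imm);	/* 0xffffffff  (wrong) */
	printf("new: 0x%llx\n", new_imm);	/* 0x1ffffffff (correct) */
	return 0;
}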