> On Jan 9, 2023, at 4:15 PM, Christophe Leroy <christophe.leroy@xxxxxxxxxx> wrote:
>
>
>
> On 06/01/2023 at 16:37, Daniel Borkmann wrote:
>> On 1/5/23 6:53 PM, Christophe Leroy wrote:
>>> On 05/01/2023 at 04:06, tong@xxxxxxxxxxxxx wrote:
>>>> From: Tonghao Zhang <tong@xxxxxxxxxxxxx>
>>>>
>>>> On x86_64, this approach can't dump valid insns. A test BPF prog
>>>> which includes a subprog:
>>>>
>>>> $ llvm-objdump -d subprog.o
>>>> Disassembly of section .text:
>>>> 0000000000000000 <subprog>:
>>>>   0: 18 01 00 00 73 75 62 70 00 00 00 00 72 6f 67 00  r1 = 29114459903653235 ll
>>>>   2: 7b 1a f8 ff 00 00 00 00  *(u64 *)(r10 - 8) = r1
>>>>   3: bf a1 00 00 00 00 00 00  r1 = r10
>>>>   4: 07 01 00 00 f8 ff ff ff  r1 += -8
>>>>   5: b7 02 00 00 08 00 00 00  r2 = 8
>>>>   6: 85 00 00 00 06 00 00 00  call 6
>>>>   7: 95 00 00 00 00 00 00 00  exit
>>>> Disassembly of section raw_tp/sys_enter:
>>>> 0000000000000000 <entry>:
>>>>   0: 85 10 00 00 ff ff ff ff  call -1
>>>>   1: b7 00 00 00 00 00 00 00  r0 = 0
>>>>   2: 95 00 00 00 00 00 00 00  exit
>>>>
>>>> kernel print message:
>>>> [  580.775387] flen=8 proglen=51 pass=3 image=ffffffffa000c20c from=kprobe-load pid=1643
>>>> [  580.777236] JIT code: 00000000: cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
>>>> [  580.779037] JIT code: 00000010: cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
>>>> [  580.780767] JIT code: 00000020: cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
>>>> [  580.782568] JIT code: 00000030: cc cc cc
>>>>
>>>> $ bpf_jit_disasm
>>>> 51 bytes emitted from JIT compiler (pass:3, flen:8)
>>>> ffffffffa000c20c + <x>:
>>>>   0: int3
>>>>   1: int3
>>>>   2: int3
>>>>   3: int3
>>>>   4: int3
>>>>   5: int3
>>>>   ...
>>>>
>>>> The image/insns only become valid once bpf_jit_binary_pack_finalize
>>>> is invoked and copies rw_header into header; until then the image is
>>>> int3 padding, as shown above. BTW, we can use "bpftool prog dump" to
>>>> get the JITed instructions instead.
>>>
>>> NACK.
>>>
>>> Because the feature is buggy on x86_64, you remove it for all
>>> architectures?
>>>
>>> On powerpc, bpf_jit_enable == 2 works and is very useful.
>>>
>>> Last time I tried to use bpftool on powerpc/32 it didn't work. I
>>> don't remember the details; I think it was an issue with endianness.
>>> Maybe it is fixed now, but that needs to be verified.
>>>
>>> So please, before removing a working and useful feature, make sure
>>> there is an alternative available for all architectures in all
>>> configurations.
>>>
>>> Also, I don't think bpftool is usable to dump the kernel BPF
>>> selftests. That's vital when a selftest fails, if you want to have a
>>> chance to understand why it fails.
>>
>> If this is actively used by JIT developers and considered useful, I'd
>> be ok to leave it for the time being. The overall goal is to reach
>> feature parity among (at least major arch) JITs and not have most
>> functionality available only in the x86-64 JIT. Could you however
>> check what is not working with bpftool on powerpc/32? Perhaps it's
>> not too much effort to just fix it, but details would be useful;
>> otherwise 'it didn't work' is too fuzzy.
>
> Sure, I will try to test bpftool again in the coming days.
>
> Previous discussion on that subject is here:
> https://patchwork.kernel.org/project/linux-riscv/patch/20210415093250.3391257-1-Jianlin.Lv@xxxxxxx/#24176847

Hi Christophe,

Any progress? We discussed deprecating bpf_jit_enable == 2 back in
2021, but bpftool could not run on powerpc. Can we fix this issue now?
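For reference, the alternative flow we have in mind is roughly the
following sketch; the ID 42 is only a placeholder for whatever
"bpftool prog list" reports on the test machine:

$ bpftool prog list                  # locate the loaded program and its ID
$ bpftool prog dump xlated id 42     # translated (verifier-level) insns
$ bpftool prog dump jited id 42      # JIT-compiled native insns

If the jited dump is broken on powerpc/32 while the xlated dump of the
same program is fine, that would suggest the problem is in bpftool's
disassembly/endianness handling rather than in the JIT itself.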
>> Also, with regards to the last statement that bpftool is not usable
>> to dump the kernel BPF selftests: could you elaborate some more? I
>> haven't used bpf_jit_enable == 2 in a long time and for debugging
>> have always relied on bpftool to dump xlated insns or JIT output. Or
>> do you mean by BPF selftests the test_bpf.ko module? Given it has a
>> big batch of kernel-only tests, I can see it's probably still useful
>> there.
>
> Yes, I mean test_bpf.ko.
>
> I used it as the test basis when I implemented eBPF for powerpc/32.
> And not so long ago it helped discover and fix a bug, see
> https://github.com/torvalds/linux/commit/89d21e259a94f7d5582ec675aa445f5a79f347e4
>
>>
>> Cheers,
>> Daniel
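P.S. For anyone trying to reproduce Christophe's test_bpf.ko workflow,
a minimal sketch, assuming a kernel built with CONFIG_TEST_BPF=m and
without CONFIG_BPF_JIT_ALWAYS_ON (which pins the sysctl to 1):

$ echo 2 > /proc/sys/net/core/bpf_jit_enable   # JIT with debug dump, the mode discussed above
$ modprobe test_bpf                            # runs the kernel-only BPF test suite on load
$ dmesg | grep test_bpf | tail                 # per-test results and PASSED/FAILED summary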