Re: [PATCH bpf-next v1 5/8] libbpf: Support opening bpf objects of either endianness

On Wed, Aug 21, 2024 at 06:55:58PM -0700, Alexei Starovoitov wrote:
> On Wed, Aug 21, 2024 at 2:10 AM Tony Ambardar <tony.ambardar@xxxxxxxxx> wrote:
> >
> >
> > +static inline void bpf_insn_bswap(struct bpf_insn *insn)
> > +{
> > +       /* dst_reg & src_reg nibbles */
> > +       __u8 *regs = (__u8 *)insn + offsetofend(struct bpf_insn, code);
> > +
> > +       *regs = (*regs >> 4) | (*regs << 4);
> > +       insn->off = bswap_16(insn->off);
> > +       insn->imm = bswap_32(insn->imm);
> > +}
> 
> This is really great!
> Thank you for working on it.

Happy to help! The endian restrictions were a long-time annoyance for me.
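For anyone reading along: the nibble swap above is needed because dst_reg
and src_reg are 4-bit bitfields sharing a single byte of struct bpf_insn,
and bitfield allocation within that byte differs between big- and
little-endian ABIs. A minimal sketch of how a caller might apply it across
a program (the helper name and loop are hypothetical, not part of the
patch):

#include <stddef.h>
#include <linux/bpf.h>

static void bpf_insns_bswap(struct bpf_insn *insns, size_t insn_cnt)
{
        size_t i;

        /* Swap every instruction in place, e.g. after detecting that the
         * object's byte order differs from the host's; relies on the
         * bpf_insn_bswap() helper from the hunk quoted above.
         */
        for (i = 0; i < insn_cnt; i++)
                bpf_insn_bswap(&insns[i]);
}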

> 
> This idea was brought up a couple times, since folks want to compile
> bpf prog once, embed it into their user space binary,
> and auto adjust to target endianness.
> Cross compilation isn't important to them,
> but the ability to embed a single .o instead of two .o-s is a big win.

Ah, interesting use case. I hadn't really considered that or tested it.
I suppose .symtab and .rel* use standard ELF types so those are covered,
.strtab doesn't matter, and we now have BTF/BTF.ext converters, so why not?
Something like light skeleton might be a problem though, since its data
blob is heterogeneous and would be hard to byte-swap after it's written.
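
To make that use case concrete, something like the following is what I
imagine on the application side, assuming the open path in this series
handles the conversion; the embedded-object symbols and helper name are
hypothetical (e.g. produced by 'ld -r -b binary prog.bpf.o'):

#include <stddef.h>
#include <bpf/libbpf.h>

/* prog.bpf.o embedded into the user space binary at build time */
extern const char _binary_prog_bpf_o_start[];
extern const char _binary_prog_bpf_o_end[];

struct bpf_object *open_embedded_obj(void)
{
        size_t sz = _binary_prog_bpf_o_end - _binary_prog_bpf_o_start;

        /* A non-native-endian object gets its instructions, ELF metadata
         * and BTF/BTF.ext byte-swapped to host order while opening, so a
         * single .o can be shipped for both target endiannesses.
         * Returns NULL (with errno set) on failure.
         */
        return bpf_object__open_mem(_binary_prog_bpf_o_start, sz, NULL);
}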

> 
> It's great that the above insn, elf and btf adjustments are working.
> Since endianness is encoded in elf what's the point of
> extra btf_ext__endianness libbpf api?
> Aren't elf and btf.ext supposed to be in the same endianness all the time?

I implemented BTF.ext support following the BTF endianness API example,
which handles raw, in-memory BTF and not just ELF object files. With BTF,
we have API clients like pahole, but only internal usage so far for
BTF.ext, and no notion of "raw" BTF.ext. I suppose exposing an API
for btf_ext__endianness isn't strictly needed right now, but I can
imagine BTF-processing clients using it. What are your thoughts, Andrii?
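
To illustrate the kind of client usage I have in mind: a tool that wants to
emit BTF in a fixed byte order regardless of the host can already do
something like the sketch below with the BTF calls, and a
btf_ext__set_endianness() counterpart would allow the same for .BTF.ext
data (the helper is only an illustration):

#include <bpf/btf.h>

/* Normalize in-memory BTF to big-endian before serializing it, no matter
 * what the host byte order is.
 */
static const void *btf_raw_data_big_endian(struct btf *btf, __u32 *size)
{
        if (btf__endianness(btf) != BTF_BIG_ENDIAN &&
            btf__set_endianness(btf, BTF_BIG_ENDIAN))
                return NULL;

        return btf__raw_data(btf, size);
}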

BTW, I just fixed a bug in my light skeleton code that made the test_progs
'map_ptr' test fail, so I'll be sending out a v2 of the series.

Currently, I have only 2 unexpected test failures on s390x:

subtest_userns:PASS:socketpair 0 nsec
subtest_userns:PASS:fork 0 nsec
recvfd:PASS:recvmsg 0 nsec
recvfd:PASS:cmsg_null 0 nsec
recvfd:PASS:cmsg_len 0 nsec
recvfd:PASS:cmsg_level 0 nsec
recvfd:PASS:cmsg_type 0 nsec
parent:PASS:recv_bpffs_fd 0 nsec
materialize_bpffs_fd:PASS:fs_cfg_cmds 0 nsec
materialize_bpffs_fd:PASS:fs_cfg_maps 0 nsec
materialize_bpffs_fd:PASS:fs_cfg_progs 0 nsec
materialize_bpffs_fd:PASS:fs_cfg_attachs 0 nsec
parent:PASS:materialize_bpffs_fd 0 nsec
sendfd:PASS:sendmsg 0 nsec
parent:PASS:send_mnt_fd 0 nsec
recvfd:PASS:recvmsg 0 nsec
recvfd:PASS:cmsg_null 0 nsec
recvfd:PASS:cmsg_len 0 nsec
recvfd:PASS:cmsg_level 0 nsec
recvfd:PASS:cmsg_type 0 nsec
parent:PASS:recv_token_fd 0 nsec
parent:FAIL:waitpid_child unexpected error: 22 (errno 3)
#402/9   token/obj_priv_implicit_token_envvar:FAIL

and

libbpf: prog 'on_event': BPF program load failed: Bad address
libbpf: prog 'on_event': -- BEGIN PROG LOAD LOG --
The sequence of 8193 jumps is too complex.
verification time 2633000 usec
stack depth 360
processed 116096 insns (limit 1000000) max_states_per_insn 1 total_states 5061 peak_states 5061 mark_read 2540
-- END PROG LOAD LOG --
libbpf: prog 'on_event': failed to load: -14
libbpf: failed to load object 'pyperf600.bpf.o'
scale_test:FAIL:expect_success unexpected error: -14 (errno 14)
#525     verif_scale_pyperf600:FAIL


I'd appreciate any thoughts on troubleshooting these, and will continue
looking into them.

Cheers,
Tony



