On Wed, Sep 30, 2020 at 11:01:45PM +0200, Daniel Borkmann wrote:
> On 9/30/20 9:20 PM, Alexei Starovoitov wrote:
> > On Wed, Sep 30, 2020 at 05:18:20PM +0200, Daniel Borkmann wrote:
> > > +
> > > +#ifndef barrier_data
> > > +# define barrier_data(ptr)	asm volatile("": :"r"(ptr) :"memory")
> > > +#endif
> > > +
> > > +#ifndef ctx_ptr
> > > +# define ctx_ptr(field)		(void *)(long)(field)
> > > +#endif
> >
> > > +static __always_inline bool is_remote_ep_v4(struct __sk_buff *skb,
> > > +					    __be32 addr)
> > > +{
> > > +	void *data_end = ctx_ptr(skb->data_end);
> > > +	void *data = ctx_ptr(skb->data);
> >
> > please consider adding:
> >   __bpf_md_ptr(void *, data);
> >   __bpf_md_ptr(void *, data_end);
> > to struct __sk_buff in a followup to avoid this casting headache.
>
> You mean also for the other ctx types? I can take a look, yeah.

I mean we can add two new fields to __sk_buff with a proper 'void *' type
and rename the old ones:

struct __sk_buff {
	...
-	u32 data;
+	u32 data_deprecated;
-	u32 data_end;
+	u32 data_end_deprecated;
	...
+	__bpf_md_ptr(void *, data);
+	__bpf_md_ptr(void *, data_end);
};

All existing progs will compile fine, because they do the type cast anyway,
but new progs wouldn't need to do the cast anymore. It would solve some llvm
headaches due to the 32-bit loads too. Or we can introduce two new fields
with new names.

> Yeah, so the barrier_data() was there to keep the compiler from optimizing
> the load away, and the bpf_ntohl() to load the target ifindex which was
> stored in big endian. Thanks for applying the set. I'll look into reworking
> this to have a loader application w/ the global data, then pin it and have
> iproute2 pick it up from the pinned location, for example (or directly
> interact with netlink wrt attaching ... I'll see which is better).

Thanks! Appreciate it.
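
[Editor's note] For readers following along, a minimal sketch (not from the
patch set) of what the cast currently looks like in a tc classifier program,
and what the proposed 'void *' fields would make unnecessary. The program and
section names are made up for illustration; it should build with
clang -O2 -target bpf against libbpf's headers:

#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>

SEC("classifier")
int cls_cast_example(struct __sk_buff *skb)
{
	/* Today: __sk_buff exposes data/data_end as u32, so progs must cast
	 * them back to pointers before doing bounds checks.
	 */
	void *data_end = (void *)(long)skb->data_end;
	void *data = (void *)(long)skb->data;

	if (data + sizeof(struct ethhdr) > data_end)
		return TC_ACT_OK;

	/* With the proposed __bpf_md_ptr(void *, data) / data_end fields
	 * (not in the uapi as of this thread), the two casts above would
	 * simply become:
	 *
	 *	void *data_end = skb->data_end;
	 *	void *data = skb->data;
	 */
	return TC_ACT_OK;
}

char LICENSE[] SEC("license") = "GPL";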
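
[Editor's note] A rough, hypothetical sketch of the barrier_data()/bpf_ntohl()
pattern described above: a placeholder ifindex is kept in the program image in
big-endian form (so an external loader can patch it), barrier_data() keeps the
compiler from folding the load away, and bpf_ntohl() converts it to host order
before redirecting. This is illustrative only, not the actual selftest code;
the variable name is invented:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#ifndef barrier_data
# define barrier_data(ptr)	asm volatile("": :"r"(ptr) :"memory")
#endif

/* Placeholder value, assumed to be patched by a loader before attach. */
static volatile const __u32 ifindex_target_be;

SEC("classifier")
int cls_redirect_example(struct __sk_buff *skb)
{
	__u32 idx = ifindex_target_be;

	/* Keep the compiler from constant-folding or eliding the load. */
	barrier_data(&idx);

	return bpf_redirect(bpf_ntohl(idx), 0);
}

char LICENSE[] SEC("license") = "GPL";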
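
[Editor's note] And for the loader-plus-pinning idea at the end, a very rough
userspace sketch of one possible shape: the loader would fill in the global
data, load the object, and pin the program under /sys/fs/bpf so that iproute2
(or another tool) can attach it from the pinned path. The object file name,
program name, and pin path here are all hypothetical; only basic libbpf calls
are used:

#include <errno.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

int main(void)
{
	struct bpf_object *obj;
	struct bpf_program *prog;
	int err;

	obj = bpf_object__open_file("cls_redirect_example.o", NULL);
	if (libbpf_get_error(obj))
		return 1;

	/* A real loader would set the program's global data here, e.g. the
	 * target ifindex, before bpf_object__load().
	 */
	err = bpf_object__load(obj);
	if (err)
		goto out;

	prog = bpf_object__find_program_by_name(obj, "cls_redirect_example");
	if (!prog) {
		err = -ENOENT;
		goto out;
	}

	/* Pin the program so that iproute2 (or another tool) can pick it up
	 * from this path and attach it via tc.
	 */
	err = bpf_obj_pin(bpf_program__fd(prog),
			  "/sys/fs/bpf/cls_redirect_example");
out:
	bpf_object__close(obj);
	return err ? 1 : 0;
}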