Re: Are BPF tail calls only supposed to work with pinned maps?


On Thu, Sep 26, 2019 at 03:12:30PM +0200, Toke Høiland-Jørgensen wrote:
> Daniel Borkmann <daniel@xxxxxxxxxxxxx> writes:
> > On Thu, Sep 26, 2019 at 01:23:38PM +0200, Toke Høiland-Jørgensen wrote:
> > [...]
> >> While working on a prototype of the XDP chain call feature, I ran into
> >> some strange behaviour with tail calls: If I create a userspace program
> >> that loads two XDP programs, one of which tail calls the other, the tail
> >> call map would appear to be empty even though the userspace program
> >> populates it as part of the program loading.
> >> 
> >> I eventually tracked this down to this commit:
> >> c9da161c6517 ("bpf: fix clearing on persistent program array maps")
> >
> > Correct.
> >
> >> Which clears PROG_ARRAY maps whenever the last uref to them disappears
> >> (which happens when my loader exits after attaching the XDP program).
> >> 
> >> This effectively means that tail calls only work if the PROG_ARRAY map
> >> is pinned (or the process creating it keeps running). And as far as I
> >> can tell, the inner_map reference in bpf_map_fd_get_ptr() doesn't bump
> >> the uref either, so presumably if one were to create a map-in-map
> >> construct with a tail call pointer in the inner map(s), each inner map
> >> would also need to be pinned (haven't tested this case)?
> >
> > There is no map in map support for tail calls today.
> 
> Not directly, but can't a program do:
> 
> tail_call_map = bpf_map_lookup(outer_map, key);
> bpf_tail_call(tail_call_map, idx);

Nope, that is what I meant: bpf_map_meta_alloc() will bail out in that
case.
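
For illustration, here is a minimal userspace sketch (raw bpf(2) syscall,
not code from this thread; needs root) of where that restriction bites:
creating an ARRAY_OF_MAPS whose inner map is a PROG_ARRAY is rejected at
map creation time, which is the bpf_map_meta_alloc() check above.

/* sketch: PROG_ARRAY is refused as an inner map for map-in-map */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

static int bpf_map_create_raw(union bpf_attr *attr)
{
	return syscall(__NR_bpf, BPF_MAP_CREATE, attr, sizeof(*attr));
}

int main(void)
{
	union bpf_attr attr;
	int prog_array_fd, outer_fd;

	/* Inner map: a tail-call program array (values are program fds). */
	memset(&attr, 0, sizeof(attr));
	attr.map_type    = BPF_MAP_TYPE_PROG_ARRAY;
	attr.key_size    = sizeof(__u32);
	attr.value_size  = sizeof(__u32);
	attr.max_entries = 4;
	prog_array_fd = bpf_map_create_raw(&attr);
	if (prog_array_fd < 0) {
		perror("PROG_ARRAY create");
		return 1;
	}

	/* Outer map: array-of-maps with the prog array as inner template.
	 * bpf_map_meta_alloc() rejects PROG_ARRAY inner maps, so this
	 * create is expected to fail.
	 */
	memset(&attr, 0, sizeof(attr));
	attr.map_type     = BPF_MAP_TYPE_ARRAY_OF_MAPS;
	attr.key_size     = sizeof(__u32);
	attr.value_size   = sizeof(__u32);	/* holds an inner map fd */
	attr.max_entries  = 1;
	attr.inner_map_fd = prog_array_fd;
	outer_fd = bpf_map_create_raw(&attr);
	if (outer_fd < 0)
		printf("ARRAY_OF_MAPS with PROG_ARRAY inner map rejected: %s\n",
		       strerror(errno));

	return 0;
}

So the map-in-map route around the uref behaviour is not available; the
tail call map itself has to be kept alive, either pinned or held by a
running process.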

> >> Is this really how things are supposed to work? From an XDP use case PoV
> >> this seems somewhat surprising...
> >> 
> >> Or am I missing something obvious here?
> >
> > It was done this way back then in order to break up cyclic dependencies:
> > otherwise the programs and maps involved would never get freed, since they
> > reference themselves and would live on in the kernel forever, consuming a
> > potentially large amount of resources. So orchestration tools like Cilium
> > typically just pin the maps in bpf fs (like most other maps it uses and
> > accesses from the agent side) in order to up/downgrade the agent while
> > keeping the BPF datapath intact.
> 
> Right. I can see how the cyclic reference thing gets thorny otherwise.
> However, the behaviour was somewhat surprising to me; is it documented
> anywhere?

Haven't updated the BPF guide in a while [0], and I don't think I documented
this detail back then, so right now it only lives in the git log. Improvements
to the reference guide are definitely welcome.
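
In case it helps as a starting point, a rough loader sketch of the pinning
approach (object, map and program names here are made up, not from your
prototype): populate the prog array and pin it in bpf fs before the loader
exits, so the tail call entries survive.

/* Hypothetical names: object "xdp_chain.o", PROG_ARRAY map "jmp_table",
 * programs "xdp_entry" / "xdp_tail". Pinning the PROG_ARRAY keeps a uref
 * on it, so its entries are not flushed when this loader exits.
 */
#include <stdio.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

#define PIN_PATH "/sys/fs/bpf/jmp_table"	/* bpffs must be mounted */

int main(void)
{
	struct bpf_object *obj;
	struct bpf_program *tail_prog;
	int map_fd, tail_fd;
	__u32 key = 0;

	obj = bpf_object__open_file("xdp_chain.o", NULL);
	if (libbpf_get_error(obj) || bpf_object__load(obj))
		return 1;

	map_fd = bpf_object__find_map_fd_by_name(obj, "jmp_table");
	tail_prog = bpf_object__find_program_by_name(obj, "xdp_tail");
	if (map_fd < 0 || !tail_prog)
		return 1;
	tail_fd = bpf_program__fd(tail_prog);

	/* Populate slot 0 of the prog array with the tail-called program. */
	if (bpf_map_update_elem(map_fd, &key, &tail_fd, 0))
		return 1;

	/* Without this pin (or a long-running process holding the map fd),
	 * the prog array is cleared as soon as the last uref goes away,
	 * i.e. when this loader exits.
	 */
	if (bpf_obj_pin(map_fd, PIN_PATH)) {
		perror("bpf_obj_pin");
		return 1;
	}

	/* ... attach "xdp_entry" to the interface here, then exiting is
	 * safe: the pinned map keeps the tail call entries alive ...
	 */
	return 0;
}

A later tool (or an upgraded agent) can then reattach to the same map via
bpf_obj_get() on the pin path, which is what keeps the datapath intact
across agent restarts.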

Thanks,
Daniel

  [0] https://cilium.readthedocs.io/en/latest/bpf/
      https://github.com/cilium/cilium/blob/master/Documentation/bpf.rst


