On 11/30/23 17:32, Daniel Borkmann wrote:
On 11/30/23 2:55 PM, Toke Høiland-Jørgensen wrote:
Daniel Borkmann <daniel@xxxxxxxxxxxxx> writes:
On 11/29/23 10:52 PM, Toke Høiland-Jørgensen wrote:
Edward Cree <ecree.xilinx@xxxxxxxxx> writes:
On 28/11/2023 14:39, Toke Høiland-Jørgensen wrote:
I'm not quite sure what should be the semantics of that, though. I.e.,
if you are trying to aggregate two packets that have the flag set, which
packet do you take the value from? What if only one packet has the flag
set?
It would probably make sense if both packets have it set.
Right, so "aggregate only if both packets have the flag set, keeping the
metadata area from the first packet", then?
Yes, sgtm.
Seems like a good default behavior: "keeping the metadata area from the
first packet".
(Please object if someone sees an issue for their use-case with this
default.)
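
To spell out the rule, here is a minimal plain-C sketch (not actual
kernel code; the keep_meta flag and the struct are made-up illustrations
of the proposed semantics, next to today's behaviour where differing
metadata simply blocks aggregation):

#include <stdbool.h>

/* Illustrative stand-in for the two bits of skb state the decision
 * needs; not the real struct sk_buff.
 */
struct pkt_meta {
	bool keep_meta;          /* proposed "keep metadata across GRO" flag */
	bool metadata_differs;   /* result of a skb_metadata_differs()-style compare */
};

/* Proposed rule: if both packets carry the flag, coalesce anyway and
 * keep the metadata area of the first (held) packet 'p'; otherwise fall
 * back to today's behaviour of only coalescing identical metadata.
 */
static bool can_coalesce(const struct pkt_meta *p, const struct pkt_meta *skb)
{
	if (p->keep_meta && skb->keep_meta)
		return true;
	return !skb->metadata_differs;
}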
Or should we instead have a "metadata_xdp_only" flag that just
prevents the skb metadata field from being set entirely?
What would be the use case compared to resetting the metadata right before
we return with XDP_PASS?
I was thinking it could save a call to xdp_adjust_meta() to reset it
back to zero before PASSing the packet. But okay, that may be of
marginal utility.
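
(For completeness, the reset I mean is just handing the metadata area back
with a positive delta before PASSing; a minimal sketch, with the 8-byte
size and the timestamp purely illustrative:)

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define META_SZ 8	/* illustrative; metadata length must be a multiple of 4 */

SEC("xdp")
int xdp_meta_reset_example(struct xdp_md *ctx)
{
	void *data, *data_meta;
	__u64 *val;

	/* Grow the metadata area in front of the packet data. */
	if (bpf_xdp_adjust_meta(ctx, -META_SZ))
		return XDP_PASS;

	data      = (void *)(long)ctx->data;
	data_meta = (void *)(long)ctx->data_meta;
	val = data_meta;
	if ((void *)(val + 1) > data)	/* bounds check for the verifier */
		return XDP_PASS;

	*val = bpf_ktime_get_ns();	/* e.g. an RX timestamp for XDP-local use */

	/* ... use *val within this program ... */

	/* Give the area back so nothing is attached to the skb on PASS. */
	bpf_xdp_adjust_meta(ctx, META_SZ);

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";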
Agree, feels too marginal.
I should explain our use-case(s) a bit more.
We do want the information to survive XDP_PASS into the SKB.
It's the whole point: we want to transfer information from the XDP layer to
the TC layer, and perhaps further, all the way to BPF socket filters (which
I have even heard someone ask for).
I'm trying to get an overview, as I now have multiple product teams that
want to store information across/into different layers, and other teams
that consume this information.
We are exploring more options than only the XDP metadata area for storing
this information. I have suggested that once an SKB has a socket
associated, we can switch to using BPF local socket storage tricks (rough
sketch below). (The lifetime of XDP metadata is not 100% clear, as e.g.
pskb_expand_head clears it via skb_metadata_clear.)
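
Roughly the kind of sk_storage pattern I have in mind (the map, struct,
hook point, and where the colo ID value comes from at this layer are all
made up for illustration):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Illustrative per-socket storage for the foreign landing colo ID. */
struct colo_info {
	__u32 landing_colo_id;
};

struct {
	__uint(type, BPF_MAP_TYPE_SK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, struct colo_info);
} colo_sk_storage SEC(".maps");

SEC("sockops")
int store_colo(struct bpf_sock_ops *skops)
{
	struct bpf_sock *sk = skops->sk;
	struct colo_info *ci;

	if (!sk)
		return 1;

	ci = bpf_sk_storage_get(&colo_sk_storage, sk, NULL,
				BPF_SK_STORAGE_GET_F_CREATE);
	if (ci)
		ci->landing_colo_id = 42;	/* placeholder: would be copied
						 * from whatever carried it up
						 * to this layer */
	return 1;
}

char _license[] SEC("license") = "GPL";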
All ideas are welcome; e.g. I'm also looking at the ability to store
auxiliary/metadata data associated with a dst_entry. And skb->mark is
already used for other use-cases and isn't big enough (and then there is
the fun of crossing a netns boundary).
Let me explain *one* of the concrete use-cases. As described in [1],
the CF XDP L4 load-balancer Unimog has been extended to a product
called Plurimog that does load-balancing across data centers ("colos").
When Plurimog redirects to another colo, the original "landing" colo's
ID is carried across (in some encap header) to a Unimog instance. Thus,
the original landing colo ID is known to the Unimog running in another
colo, but that header is popped, so this info needs to be transferred
somehow.
I'm told that even the webserver/Nginx needs to know the orig/foreign
landing colo ID (here there should be a socket associated). For TCP SYN
packets, the layered DoS protection also needs to know the foreign
landing colo ID. Other teams/products need this for accounting, e.g.
Traffic Manager[1], Radar[2] and capacity planning.
[1] https://blog.cloudflare.com/meet-traffic-manager/
[2] https://radar.cloudflare.com/
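
To make the metadata path concrete, a rough sketch of the pattern (the
struct layout, the colo ID value, and the section names are illustrative;
in reality Unimog would parse the ID out of the encap header before
popping it): the XDP program stashes the landing colo ID in the metadata
area, and a TC ingress program picks it up from data_meta after XDP_PASS.

#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

/* Illustrative layout of the metadata area shared between the layers. */
struct colo_meta {
	__u32 landing_colo_id;
};

SEC("xdp")
int xdp_store_colo(struct xdp_md *ctx)
{
	struct colo_meta *meta;
	void *data;

	if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(*meta)))
		return XDP_PASS;

	data = (void *)(long)ctx->data;
	meta = (void *)(long)ctx->data_meta;
	if ((void *)(meta + 1) > data)
		return XDP_PASS;

	meta->landing_colo_id = 7;	/* stand-in for the parsed colo ID */

	return XDP_PASS;
}

SEC("tc")
int tc_read_colo(struct __sk_buff *skb)
{
	struct colo_meta *meta = (void *)(long)skb->data_meta;
	void *data = (void *)(long)skb->data;

	if ((void *)(meta + 1) > data)
		return TC_ACT_OK;	/* no metadata made it this far */

	/* The TC layer (and anything after it) can now use the value,
	 * e.g. for accounting or to copy into sk_storage. */
	bpf_printk("landing colo id %u", meta->landing_colo_id);

	return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";

Whether the metadata actually survives to the TC hook for aggregated
packets is exactly the GRO question above.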
Sounds like what's actually needed is bpf progs inside the GRO engine
to implement the metadata "protocol" prepare and coalesce
callbacks?
Hmm, yes, I guess that would be the most general solution :)
Feels like a potentially good fit, agree, although for just solving the
above, something not requiring extra BPF might be nice as well.
Yeah, I agree that just the flag makes sense on its own.
I've mentioned before (e.g. at NetConf) that I would really like to see BPF
progs inside the GRO engine, but that is a larger project on its own.
I think it is worth doing eventually, but I likely need a solution to
unblock the "tracing"/debugging use-case, where someone added a
timestamp to XDP metadata and discovered GRO was not working.
I guess we can do the Plurimog use-case now, as the colo ID metadata should
be stable for packets belonging to the same (GRO) flow.
--Jesper