On 7/10/20 9:02 AM, Daniel Borkmann wrote:
> Right, but what about the other direction where one device forwards to a
> bond, presumably eth1 + eth2 are in the include map and shared also
> between other ifaces? Given the logic for the bond mode is on bond0, so
> one layer higher, how do you determine which of eth1 + eth2 to send to in
> the BPF prog? Daemon listening for link events via arp or mii monitor and
> then update include map? Ideally would be nice to have some sort of a
> bond0 pass-through for the XDP buffer so it ends up eventually at one of
> the two through the native logic, e.g. what do you do when it's configured
> in xor mode or when slave dev is selected via hash or some other user
> logic (e.g. via team driver); how would this be modeled via inclusion map?
> I guess the issue can be regarded independently to this set, but given you
> mention explicitly bond here as a use case for the exclusion map, I was
> wondering how you solve the inclusion one for bond devices for your data
> plane?

The bond driver does not support xdp_xmit, and I do not believe there is a
good ROI for adapting it to handle xdp buffers.

For round-robin and active-backup modes it is straightforward to adapt the
new ndo_get_xmit_slave to work with ebpf. That is not the case for any of
the modes that use a hash on the skb -- e.g., for L3+L4 hashing I found it
easier to replicate the algorithm in bpf than to adapt the bond code to
work with XDP buffers. I put that in the category of 'XDP is advanced
networking that requires unraveling the generic for a specific deployment.'

In short, for bonds and Tx the bpf program needs to pick the slave device.
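
To make the "bpf picks the slave" point concrete, here is a minimal sketch
of that direction. It assumes a devmap (xmit_slaves, a name made up for the
example) that userspace populates with the slaves' ifindexes, and it uses a
simple address/port xor as the L3+L4 hash; the real bond layer3+4 hash
differs in detail, this only illustrates the shape of the program:

// SPDX-License-Identifier: GPL-2.0
/* Illustrative only: map name, hash and slave count are assumptions. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/udp.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#define NUM_SLAVES 2	/* e.g., eth1 + eth2 */

/* devmap indexed 0..NUM_SLAVES-1, filled from userspace with the
 * ifindexes of the bond slaves.
 */
struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP);
	__uint(max_entries, NUM_SLAVES);
	__uint(key_size, sizeof(__u32));
	__uint(value_size, sizeof(__u32));
} xmit_slaves SEC(".maps");

SEC("xdp")
int pick_slave(struct xdp_md *ctx)
{
	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	struct ethhdr *eth = data;
	struct iphdr *iph;
	struct udphdr *uh;
	__u32 hash;

	if ((void *)(eth + 1) > data_end)
		return XDP_DROP;
	if (eth->h_proto != bpf_htons(ETH_P_IP))
		return XDP_PASS;

	iph = (void *)(eth + 1);
	if ((void *)(iph + 1) > data_end)
		return XDP_DROP;

	/* L3 part of the hash */
	hash = iph->saddr ^ iph->daddr;

	/* L4 part: TCP and UDP both start with src/dst port; only handle
	 * the no-options case to keep the example short.
	 */
	if ((iph->protocol == IPPROTO_TCP || iph->protocol == IPPROTO_UDP) &&
	    iph->ihl == 5) {
		uh = (void *)(iph + 1);
		if ((void *)(uh + 1) > data_end)
			return XDP_DROP;
		hash ^= ((__u32)uh->source << 16) | uh->dest;
	}

	hash = (hash >> 16) ^ (hash & 0xffff);

	/* Redirect to the selected slave's ifindex via the devmap. */
	return bpf_redirect_map(&xmit_slaves, hash % NUM_SLAVES, 0);
}

char _license[] SEC("license") = "GPL";

The include-map maintenance Daniel mentions fits here too: the daemon
watching link state (arp or mii monitor) updates xmit_slaves when a slave
goes down, so the program only ever hashes over healthy slaves.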