On Tue, Jul 14, 2020 at 11:12:59AM -0600, David Ahern wrote:
> >> with pktgen (pkt size 64) to compare with xdp_redirect_map(). Here is the
> >> test result (the veth peer has a dummy xdp program with XDP_DROP directly):
> >>
> >> Version         | Test                                    | Native | Generic
> >> 5.8 rc1         | xdp_redirect_map       i40e->i40e       |  10.0M |   1.9M
> >> 5.8 rc1         | xdp_redirect_map       i40e->veth       |  12.7M |   1.6M
> >> 5.8 rc1 + patch | xdp_redirect_map       i40e->i40e       |  10.0M |   1.9M
> >> 5.8 rc1 + patch | xdp_redirect_map       i40e->veth       |  12.3M |   1.6M
> >> 5.8 rc1 + patch | xdp_redirect_map_multi i40e->i40e       |   7.2M |   1.5M
> >> 5.8 rc1 + patch | xdp_redirect_map_multi i40e->veth       |   8.5M |   1.3M
> >> 5.8 rc1 + patch | xdp_redirect_map_multi i40e->i40e+veth  |   3.0M |  0.98M
> >>
> >> bpf_redirect_map_multi() is slower than bpf_redirect_map() as we loop
> >> over the maps and clone the skb/xdpf. The native path is slower than the
> >> generic path as we send skbs by pktgen. So the result looks reasonable.
> >>
> >> Last but not least, thanks a lot to Jiri, Eelco, Toke and Jesper for
> >> suggestions and help on the implementation.
> >>
> >> [0] https://xdp-project.net/#Handling-multicast
> >>
> >> v7: Fix helper flag check
> >>     Limit the *ex_map* to use DEVMAP_HASH only and update function
> >>     dev_in_exclude_map() to get better performance.
> >
> > Did it help? The performance numbers in the table above are the same as
> > in v6...
> >
>
> If there is only 1 entry in the exclude map, then the numbers should be
> about the same.

Yes, I didn't re-run the test. When doing the testing, I used a NULL
exclude map plus the BPF_F_EXCLUDE_INGRESS flag, so the performance
numbers should be no different from the last patch.

Thanks
Hangbin
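
For reference, below is a minimal sketch of the setup described above: a
DEVMAP_HASH forward map, a NULL exclude map and the BPF_F_EXCLUDE_INGRESS
flag. The bpf_redirect_map_multi() helper, its (map, ex_map, flags)
signature and the flag only exist in this proposed series, so the program
has to be built against headers from the patched tree; the map name and
size here are placeholders, not taken from the series' samples.

/* Sketch only: bpf_redirect_map_multi() and BPF_F_EXCLUDE_INGRESS come
 * from this patch series, not mainline; build against the patched
 * headers. Map name and size are placeholders.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Devices to multicast the frame to. DEVMAP_HASH is used here since the
 * series limits *ex_map* to DEVMAP_HASH; the forward map type is an
 * assumption for this sketch.
 */
struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP_HASH);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(int));
	__uint(max_entries, 32);
} forward_map SEC(".maps");

SEC("xdp")
int xdp_redirect_multi(struct xdp_md *ctx)
{
	/* NULL exclude map + BPF_F_EXCLUDE_INGRESS: send a copy to every
	 * device in forward_map except the ingress interface, which is
	 * the configuration used for the numbers above.
	 */
	return bpf_redirect_map_multi(&forward_map, NULL,
				      BPF_F_EXCLUDE_INGRESS);
}

The veth peer in the test only needs the trivial program mentioned above,
i.e. one that returns XDP_DROP, attached in native mode on the peer device.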