On Mon, 26 Apr 2021 19:47:42 +0800 Hangbin Liu <liuhangbin@xxxxxxxxx> wrote:

> On Mon, Apr 26, 2021 at 07:40:28PM +0800, Hangbin Liu wrote:
> > On Mon, Apr 26, 2021 at 11:53:50AM +0200, Jesper Dangaard Brouer wrote:
> > > Decode: perf_trace_xdp_redirect_template+0xba
> > >
> > >  ./scripts/faddr2line vmlinux perf_trace_xdp_redirect_template+0xba
> > >  perf_trace_xdp_redirect_template+0xba/0x130:
> > >  perf_trace_xdp_redirect_template at include/trace/events/xdp.h:89 (discriminator 13)
> > >
> > >  less -N net/core/filter.c
> > >  [...]
> > >  3993         if (unlikely(err))
> > >  3994                 goto err;
> > >  3995
> > > ->3996         _trace_xdp_redirect_map(dev, xdp_prog, fwd, map_type, map_id, ri->tgt_index);
> >
> > Oh, the fwd in xdp xdp_redirect_map broadcast is NULL...
> >
> > I will see how to fix it. Maybe assign the ingress interface to fwd?
>
> Er, sorry for the flood of messages. I just checked the tracepoint code: fwd
> in the xdp trace event means to_ifindex, so we can't assign the ingress
> interface to fwd.
>
> In the xdp_redirect_map broadcast case there is no specific to_ifindex,
> so how about just ignoring it... e.g.

Yes, the code below makes sense, and I can confirm that it solves the crash (I tested it).

IMHO leaving ifindex=0 is okay, because 0 is not a valid ifindex, meaning a caller of the tracepoint can deduce (together with the map type) that this must be a broadcast.

Thank you, Hangbin, for continuing to work on this patchset. I know it has been a long, long road, and I truly appreciate your perseverance and patience with it. With this crash fixed, I actually think we are very close to having something we can merge.

With the unlikely() added, I'm fine with the code itself. I think we need to update the patch description, but I've asked Toke to help with this. The performance measurements in the patch description are not measuring what I expected, but something else.
To avoid redoing a lot of testing, I think we can just describe what the test 'redirect_map-multi i40e->i40e' is doing. As the broadcast feature filters out the ingress port, the 'i40e->i40e' same-interface test will just drop the xdp_frame (after walking the devmap for empty ports). Or maybe it is not the same interface(?). In any case, this needs to be more clear.

I think it would be valuable to show (in the commit message) some tests that demonstrate the overhead of packet cloning. I expect the overhead of page-alloc+memcpy to be significant, but Lorenzo has a number of ideas for how to speed this up. Maybe you can simply broadcast-redirect into multiple veth devices (with XDP_DROP in the peer-dev) to demonstrate the effect and overhead of the cloning process.

> diff --git a/include/trace/events/xdp.h b/include/trace/events/xdp.h
> index fcad3645a70b..1751da079330 100644
> --- a/include/trace/events/xdp.h
> +++ b/include/trace/events/xdp.h
> @@ -110,7 +110,8 @@ DECLARE_EVENT_CLASS(xdp_redirect_template,
> 		u32 ifindex = 0, map_index = index;
> 
> 		if (map_type == BPF_MAP_TYPE_DEVMAP || map_type == BPF_MAP_TYPE_DEVMAP_HASH) {
> -			ifindex = ((struct _bpf_dtab_netdev *)tgt)->dev->ifindex;
> +			if (tgt)
> +				ifindex = ((struct _bpf_dtab_netdev *)tgt)->dev->ifindex;
> 		} else if (map_type == BPF_MAP_TYPE_UNSPEC && map_id == INT_MAX) {
> 			ifindex = index;
> 			map_index = 0;
> 
> Hangbin
> 

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer