+Petr, Nik

On Fri, Nov 26, 2021 at 11:14:31AM -0800, Jakub Kicinski wrote:
> On Fri, 26 Nov 2021 19:47:17 +0100 Toke Høiland-Jørgensen wrote:
> > > Fair. In all honesty I said that hoping to push for a more flexible
> > > approach hidden entirely in BPF, and not involving driver changes.
> > > Assuming the XDP program has more fine-grained stats we should be
> > > able to extract those instead of double-counting. Hence my vague
> > > "let's work with apps" comment.
> > >
> > > For example to a person familiar with the workload it'd be useful to
> > > know if the program returned XDP_DROP because of configured policy or
> > > failure to parse a packet. I don't think that sort of distinction is
> > > achievable at the level of standard stats.
> > >
> > > The information required by the admin is higher level. As you say the
> > > primary concern there is "how many packets did XDP eat".
> >
> > Right, sure, I am also totally fine with having only a somewhat
> > restricted subset of stats available at the interface level and making
> > everything else be BPF-based. I'm hoping we can converge on a common
> > understanding of what this "minimal set" should be :)
> >
> > > Speaking of which, one thing that badly needs clarification is our
> > > expectation around XDP packets getting counted towards the interface
> > > stats.
> >
> > Agreed. My immediate thought is that "XDP packets are interface packets"
> > but that is certainly not what we do today, so not sure if changing it
> > at this point would break things?
>
> I'd vote for taking the risk and trying to align all the drivers.

I agree. I think IFLA_STATS64 in RTM_NEWLINK should contain statistics
of all the packets seen by the netdev. The breakdown into software /
hardware / XDP should be reported via RTM_NEWSTATS.

Currently, for soft devices such as VLANs, bridges and GRE, user space
only sees statistics of packets forwarded by software, which is quite
useless when forwarding is offloaded from the kernel to hardware. Petr
is working on exposing hardware statistics for such devices via
rtnetlink.

Unlike XDP (?), we need to be able to let user space enable / disable
hardware statistics, as we have a limited number of hardware counters
and they can also reduce the bandwidth when enabled. We are thinking of
adding a new RTM_SETSTATS message for that:

# ip stats set dev swp1 hw_stats on

For query, something like (under discussion):

# ip stats show dev swp1 // all groups
# ip stats show dev swp1 group link
# ip stats show dev swp1 group offload // all sub-groups
# ip stats show dev swp1 group offload sub-group cpu
# ip stats show dev swp1 group offload sub-group hw

Like other iproute2 commands, these follow the nesting of the
RTM_{NEW,GET}STATS uAPI.

Looking at patch #1 [1], I think that whatever you decide to expose for
XDP can be queried via:

# ip stats show dev swp1 group xdp
# ip stats show dev swp1 group xdp sub-group regular
# ip stats show dev swp1 group xdp sub-group xsk

Regardless, the following command should show statistics of all the
packets seen by the netdev:

# ip -s link show dev swp1

There is a PR [2] for node_exporter to use rtnetlink to fetch netdev
statistics instead of the old proc interface. It should be possible to
extend it to use RTM_*STATS for more fine-grained statistics.

[1] https://lore.kernel.org/netdev/20211123163955.154512-2-alexandr.lobakin@xxxxxxxxx/
[2] https://github.com/prometheus/node_exporter/pull/2074
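
To make the group level concrete, below is a rough userspace sketch of
fetching the existing "link" group via RTM_GETSTATS, i.e. roughly what
"ip stats show dev swp1 group link" boils down to. This is only an
illustration, not the iproute2 code: error handling is trimmed and
"swp1" is just the example netdev used above. Whatever is decided for
XDP would presumably be another filter_mask bit and top-level attribute
next to IFLA_STATS_LINK_64:

/* Sketch: query RTM_GETSTATS for the IFLA_STATS_LINK_64 group of one
 * netdev and print a few rtnl_link_stats64 counters.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <linux/if_link.h>

int main(void)
{
	struct {
		struct nlmsghdr nlh;
		struct if_stats_msg ifsm;
	} req = {
		.nlh = {
			.nlmsg_len = NLMSG_LENGTH(sizeof(struct if_stats_msg)),
			.nlmsg_type = RTM_GETSTATS,
			.nlmsg_flags = NLM_F_REQUEST,
		},
		.ifsm = {
			.family = AF_UNSPEC,
			.ifindex = if_nametoindex("swp1"),	/* example netdev */
			/* Bitmask of the top-level groups we want back. */
			.filter_mask = IFLA_STATS_FILTER_BIT(IFLA_STATS_LINK_64),
		},
	};
	char buf[8192];
	int fd, len;

	fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
	if (fd < 0 || !req.ifsm.ifindex)
		return 1;

	send(fd, &req, req.nlh.nlmsg_len, 0);
	len = recv(fd, buf, sizeof(buf), 0);

	for (struct nlmsghdr *nlh = (struct nlmsghdr *)buf; NLMSG_OK(nlh, len);
	     nlh = NLMSG_NEXT(nlh, len)) {
		struct rtattr *rta;
		int attrlen;

		if (nlh->nlmsg_type != RTM_NEWSTATS)
			continue;

		/* Attributes follow the fixed struct if_stats_msg header. */
		rta = (struct rtattr *)((char *)NLMSG_DATA(nlh) +
					NLMSG_ALIGN(sizeof(struct if_stats_msg)));
		attrlen = nlh->nlmsg_len - NLMSG_LENGTH(sizeof(struct if_stats_msg));

		for (; RTA_OK(rta, attrlen); rta = RTA_NEXT(rta, attrlen)) {
			struct rtnl_link_stats64 s = { 0 };
			size_t payload, n;

			if (rta->rta_type != IFLA_STATS_LINK_64)
				continue;

			payload = RTA_PAYLOAD(rta);
			n = payload < sizeof(s) ? payload : sizeof(s);
			memcpy(&s, RTA_DATA(rta), n);
			printf("rx_packets=%llu tx_packets=%llu rx_dropped=%llu\n",
			       (unsigned long long)s.rx_packets,
			       (unsigned long long)s.tx_packets,
			       (unsigned long long)s.rx_dropped);
		}
	}

	close(fd);
	return 0;
}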
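
As for the "sub-group" level: in the existing uAPI a sub-group is just
an attribute nested inside the group attribute. For the offload group
that is e.g. IFLA_OFFLOAD_XSTATS_CPU_HIT, which carries a struct
rtnl_link_stats64 for the traffic that went via the CPU and is what
"group offload sub-group cpu" above would map to (the attributes for a
prospective xdp group obviously do not exist yet). A small sketch of
walking such a nest, under the same assumptions as the program above,
where the group was requested with
IFLA_STATS_FILTER_BIT(IFLA_STATS_LINK_OFFLOAD_XSTATS):

/* Sketch: extract IFLA_OFFLOAD_XSTATS_CPU_HIT from a received
 * IFLA_STATS_LINK_OFFLOAD_XSTATS group attribute ("sub-group cpu").
 */
#include <string.h>
#include <linux/rtnetlink.h>
#include <linux/if_link.h>

static int offload_cpu_hit(struct rtattr *offload_grp,
			   struct rtnl_link_stats64 *out)
{
	int len = RTA_PAYLOAD(offload_grp);
	struct rtattr *rta;

	for (rta = RTA_DATA(offload_grp); RTA_OK(rta, len);
	     rta = RTA_NEXT(rta, len)) {
		size_t n;

		if (rta->rta_type != IFLA_OFFLOAD_XSTATS_CPU_HIT)
			continue;

		memset(out, 0, sizeof(*out));
		n = RTA_PAYLOAD(rta);
		memcpy(out, RTA_DATA(rta), n < sizeof(*out) ? n : sizeof(*out));
		return 0;
	}

	return -1;	/* sub-group not present */
}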