> -----Original Message-----
> From: Jesper Dangaard Brouer <brouer@xxxxxxxxxx>
> Sent: Wednesday, April 8, 2020 7:52 AM
> To: sameehj@xxxxxxxxxx
> Cc: Wei Liu <wei.liu@xxxxxxxxxx>; KY Srinivasan <kys@xxxxxxxxxxxxx>;
> Haiyang Zhang <haiyangz@xxxxxxxxxxxxx>; Stephen Hemminger
> <sthemmin@xxxxxxxxxxxxx>; Jesper Dangaard Brouer
> <brouer@xxxxxxxxxx>; netdev@xxxxxxxxxxxxxxx; bpf@xxxxxxxxxxxxxxx;
> zorik@xxxxxxxxxx; akiyano@xxxxxxxxxx; gtzalik@xxxxxxxxxx; Toke
> Høiland-Jørgensen <toke@xxxxxxxxxx>; Daniel Borkmann
> <borkmann@xxxxxxxxxxxxx>; Alexei Starovoitov
> <alexei.starovoitov@xxxxxxxxx>; John Fastabend
> <john.fastabend@xxxxxxxxx>; Alexander Duyck
> <alexander.duyck@xxxxxxxxx>; Jeff Kirsher <jeffrey.t.kirsher@xxxxxxxxx>;
> David Ahern <dsahern@xxxxxxxxx>; Willem de Bruijn
> <willemdebruijn.kernel@xxxxxxxxx>; Ilias Apalodimas
> <ilias.apalodimas@xxxxxxxxxx>; Lorenzo Bianconi <lorenzo@xxxxxxxxxx>;
> Saeed Mahameed <saeedm@xxxxxxxxxxxx>
> Subject: [PATCH RFC v2 12/33] hv_netvsc: add XDP frame size to driver
>
> The hyperv NIC driver's XDP implementation is rather disappointing, as
> enabling XDP on this driver will be a slowdown, given that it allocates
> a new page for each packet and copies over the payload before invoking
> the XDP BPF-prog.

As explained when I submitted the XDP support for hv_netvsc: without XDP,
this driver already allocates memory and does a copy for every packet. So
the page allocation for the XDP data buffer is no slower than the existing
code path. Also, an optimization that allocates a page only once and
reuses it within a NAPI cycle is planned.

In addition, my XDP implementation for hv_netvsc transparently passes the
xdp_prog to the associated VF NIC. Many Azure VMs use SR-IOV, so the
majority of the data is actually processed directly on the VF driver's XDP
path. The overhead of the synthetic data path (hv_netvsc) is therefore
minimal.

Thanks,
- Haiyang