On 17-04-19 10:17 AM, Alexei Starovoitov wrote:
> On Wed, Apr 19, 2017 at 10:29:03AM -0400, Andy Gospodarek wrote:
>>
>> I ran this on top of a card that uses the bnxt_en driver on a desktop
>> class system with an i7-6700 CPU @ 3.40GHz, sending a single stream of
>> UDP traffic with flow control disabled, and saw the following (all
>> stats in million PPS):
>>
>>                  xdp1      xdp2                xdp_tx_tunnel
>> Generic XDP      7.8       5.5 (1.3 actual)    4.6 (1.1 actual)
>> Optimized XDP    11.7      9.7                 4.6
>
> Nice! Thanks for testing.
>
>> One thing to note is that the Generic XDP case shows different results
>> as reported by the application vs. actual (seen on the wire). I did
>> not debug where the drops are happening and which counter needs to be
>> incremented to note this -- I'll add that to my TODO list. The
>> Optimized XDP case does not have a difference between reported and
>> actual frames on the wire.
>
> The missed packets are probably due to the xmit queue being full.
> We need an 'xdp_tx_full' counter in:
>
> +	if (free_skb) {
> +		trace_xdp_exception(dev, xdp_prog, XDP_TX);
> +		kfree_skb(skb);
> +	}
>
> like in-driver XDP does.
> It's surprising that tx becomes full so often. Maybe bnxt-specific
> behavior?

Hmm, as a data point I get better numbers than 1.3 Mpps running through
the qdisc layer with pktgen, so perhaps something is wrong with the
driver? If I get a chance I'll take a look with my setup here, although
it likely won't be until the weekend. I don't think that needs to slow
down dropping the RFC tag and getting the patch applied, though.

>
>> I agree with all those who have asserted that this is a great tool for
>> those who want to get started with XDP but do not have hardware, so
>> I'd say it's ready to have the 'RFC' tag dropped. Thanks for pushing
>> this forward, Dave! :-)
>
> +1
>
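
For context, the fragment Alexei quotes is the free_skb path of the
generic XDP transmit helper (generic_xdp_tx() in net/core/dev.c) added
by the patch under discussion. Below is a minimal sketch of how the
suggested counter might be wired in; the xdp_tx_full field on struct
net_device is hypothetical and shown only to illustrate where the drops
occur, it is not part of the actual patch:

static void generic_xdp_tx(struct sk_buff *skb, struct bpf_prog *xdp_prog)
{
	struct net_device *dev = skb->dev;
	struct netdev_queue *txq;
	bool free_skb = true;
	int cpu, rc;

	txq = netdev_pick_tx(dev, skb, NULL);
	cpu = smp_processor_id();
	HARD_TX_LOCK(dev, txq, cpu);
	if (!netif_xmit_stopped(txq)) {
		rc = netdev_start_xmit(skb, dev, txq, 0);
		if (dev_xmit_complete(rc))
			free_skb = false;
	}
	HARD_TX_UNLOCK(dev, txq);
	if (free_skb) {
		/* Hypothetical counter: make the drop visible so the
		 * reported vs. actual gap (e.g. 5.5 vs 1.3 Mpps above)
		 * shows up in a statistic instead of silently vanishing.
		 */
		atomic_long_inc(&dev->xdp_tx_full);
		trace_xdp_exception(dev, xdp_prog, XDP_TX);
		kfree_skb(skb);
	}
}

A per-cpu counter would be cheaper on this hot path; a plain atomic is
used here only to keep the sketch self-contained.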