Re: [RFC PATCH net-next v8 07/14] page_pool: devmem support

On 30/04/2024 20.55, Jens Axboe wrote:
> On 4/30/24 12:29 PM, Mina Almasry wrote:
>> On Tue, Apr 30, 2024 at 6:46 AM Jens Axboe <axboe@xxxxxxxxx> wrote:
>> [...]
>>> In general, attempting to hide overhead behind config options is always
>>> a losing proposition. It merely serves to say "look, if these things
>>> aren't enabled, the overhead isn't there", while distros blindly enable
>>> pretty much everything and then you're back where you started.

>> The history there is that this check adds a 1 cycle regression to the
>> page_pool fast path benchmark. The regression, last I measured, is 8->9
>> cycles, so percentage-wise it is quite significant at 12.5% (more details
>> in the cover letter[1]). I doubt I can do much better than that, to be
>> honest.
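
[Aside: "hiding it behind a config option" in this exchange means compiling
the new fast-path check out entirely when the feature is disabled. A generic
sketch of that pattern follows; CONFIG_EXAMPLE_DEVMEM and netmem_is_special()
are made-up names for illustration, not what this series actually uses.]

  static inline bool netmem_is_special(unsigned long netmem)
  {
          if (!IS_ENABLED(CONFIG_EXAMPLE_DEVMEM))
                  return false;   /* constant-false, branch is compiled out */

          /* e.g. a tag bit stuffed into the low bits of the pointer */
          return netmem & 1UL;
  }

[With the Kconfig symbol off, IS_ENABLED() folds to 0 and the branch
disappears, so the quoted 1 cycle cost only exists when the option is on.
That is exactly the property Jens objects to relying on, since distros
enable nearly everything.]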

> I'm all for cycle counting, and do it myself too, but is that even
> measurable in anything that isn't a super targeted microbenchmark? Or
> even in that?

The reason the page_pool fast path is critical is that it is used for the
XDP_DROP use-case. E.g. on the Mellanox mlx5 driver we see 24 Mpps with
XDP_DROP, which is approx 42 nanosec per packet. Adding 9 nanosec reduces
this to 19.6 Mpps:

  1/(42+9) * 10^9 = 19,607,843 pps
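
(For anyone who wants to redo the arithmetic, here is a trivial userspace
sanity check; the 42 ns and 9 ns figures are the ones above, the rest is
only illustration.)

  /* Quick sanity check of the Mpps math above (plain userspace C). */
  #include <stdio.h>

  int main(void)
  {
          double base_ns  = 1e9 / 24e6;         /* 24 Mpps -> ~41.7 ns/packet */
          double slow_pps = 1e9 / (42.0 + 9.0); /* add 9 ns per packet        */

          printf("24 Mpps   = %.1f ns per packet\n", base_ns);
          printf("42 + 9 ns = %.1f Mpps\n", slow_pps / 1e6);
          return 0;
  }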

--Jesper

p.s. Upstreaming my PP microbenchmark[1] is still at the bottom of my
todo-list.

[1] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/lib/bench_page_pool_simple.c
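
(For readers who do not want to chase the link: the sketch below gives a
rough idea of what such a microbenchmark does. It is not the actual
bench_page_pool_simple.c code, just a minimal kernel-module approximation
against the in-tree page_pool API. The loop count, the ktime-based timing
and the module boilerplate are assumptions made for the sketch; the real
benchmark also reports TSC cycles and runs variants under softirq
protection so the direct-recycle fast path is exercised legitimately.)

  /* Minimal sketch only, not bench_page_pool_simple.c: time an
   * alloc + recycle loop against a page_pool instance. */
  #include <linux/module.h>
  #include <linux/ktime.h>
  #include <linux/math64.h>
  #include <linux/numa.h>
  #include <net/page_pool/helpers.h>

  static int __init pp_sketch_init(void)
  {
          struct page_pool_params params = {
                  .order     = 0,
                  .pool_size = 1024,
                  .nid       = NUMA_NO_NODE,
          };
          struct page_pool *pp;
          struct page *page;
          u64 t0, t1, i, loops = 10 * 1000 * 1000;

          pp = page_pool_create(&params);
          if (IS_ERR(pp))
                  return PTR_ERR(pp);

          t0 = ktime_get_ns();
          for (i = 0; i < loops; i++) {
                  page = page_pool_alloc_pages(pp, GFP_ATOMIC);
                  if (!page)
                          break;
                  /* Direct recycle is only legal under NAPI/softirq
                   * protection; the real benchmark takes care of that. */
                  page_pool_recycle_direct(pp, page);
          }
          t1 = ktime_get_ns();

          pr_info("pp_sketch: ~%llu ns per alloc+recycle\n",
                  (unsigned long long)div_u64(t1 - t0, loops));

          page_pool_destroy(pp);
          /* Return an error on purpose so the module never stays loaded. */
          return -EAGAIN;
  }
  module_init(pp_sketch_init);

  MODULE_LICENSE("GPL");
  MODULE_DESCRIPTION("Sketch of a page_pool fast-path timing loop");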



