Re: [PATCH net-next] xdp: improve page_pool xdp_return performance

Jesper Dangaard Brouer <brouer@xxxxxxxxxx> writes:

> During LPC2022 I met up with my page_pool co-maintainer Ilias. While
> discussing the page_pool code we realised/remembered that certain
> optimizations had not been fully utilised.
>
> Since commit c07aea3ef4d4 ("mm: add a signature in struct page") struct
> page has a direct pointer to the page_pool object the page was
> allocated from.
>
> Thus, with this info it is possible to skip the rhashtable_lookup to
> find the page_pool object in __xdp_return().
>
> The rcu_read_lock can be removed, as it was only tied to the
> xdp_mem_allocator lookup. The page_pool object is still safe to
> access, as it tracks inflight pages and (potentially) schedules its
> final release from a work queue.
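
For anyone reading along: as I understand it, the MEM_TYPE_PAGE_POOL
case in __xdp_return() ends up looking roughly like the below after
this change (a sketch from memory, not the literal diff):

	case MEM_TYPE_PAGE_POOL:
		page = virt_to_head_page(data);
		if (napi_direct && xdp_return_frame_no_direct())
			napi_direct = false;
		/* page->pp points back at the originating page_pool
		 * (set since c07aea3ef4d4), so no rcu_read_lock() and
		 * no rhashtable_lookup() on mem->id is needed; the
		 * pool's inflight accounting keeps it alive until all
		 * its pages have been returned.
		 */
		page_pool_put_full_page(page->pp, page, napi_direct);
		break;
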
>
> Created a micro benchmark of XDP redirect from mlx5 into a veth pair,
> with an XDP_DROP bpf-prog on the peer veth device. This increased
> performance by 6.5%, from approx 8.45 Mpps to 9 Mpps, corresponding to
> spending 7 nanosec (27 cycles at 3.8GHz) less per packet.
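
(The mlx5 redirect side isn't shown, but for reference the XDP_DROP
side of such a benchmark is just a trivial program like the one below;
a minimal sketch, where the file, section and device names are my own
examples, not taken from the patch:)

	/* xdp_drop.c: drop every packet on the attached device */
	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	SEC("xdp")
	int xdp_drop_prog(struct xdp_md *ctx)
	{
		return XDP_DROP;
	}

	char _license[] SEC("license") = "GPL";

Built with 'clang -O2 -target bpf -c xdp_drop.c -o xdp_drop.o' and
attached to the peer device with something like 'ip link set dev veth1
xdp obj xdp_drop.o sec xdp'.
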
>
> Suggested-by: Ilias Apalodimas <ilias.apalodimas@xxxxxxxxxx>
> Signed-off-by: Jesper Dangaard Brouer <brouer@xxxxxxxxxx>

Nice! The two of you should get together in person more often ;)

Acked-by: Toke Høiland-Jørgensen <toke@xxxxxxxxxx>




