Re: [PATCH v5 net-next 1/3] net: introduce page_pool pointer in softnet_data percpu struct

> On Wed, 17 Jan 2024 18:36:25 +0100 Lorenzo Bianconi wrote:
> > I would like to resume this activity and it seems to me there is no clear direction
> > about where we should add the page_pool (in a per_cpu pointer or in the
> > netdev_rx_queue struct) or if we can rely on page_frag_cache instead.
> > 
> > @Jakub: what do you think? Should we add a page_pool in a per_cpu pointer?

Hi Jakub,

> 
> Let's try to summarize. We want skb reallocation without linearization
> for XDP generic. We need some fast-ish way to get pages for the payload.

correct
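
To make this concrete, pulling the payload pages from a page_pool and
attaching them as frags could look roughly like the sketch below (illustrative
only, not the actual patch: the helper name is invented and frag-count/truesize
handling is simplified):

#include <linux/mm.h>
#include <linux/skbuff.h>
#include <net/page_pool/helpers.h>

/* Illustrative sketch: copy 'len' bytes of payload into page_pool pages and
 * attach them as frags, so the generic XDP path does not need to linearize.
 * 'pool' is wherever we end up storing the allocator (per-CPU or per-queue).
 */
static int xdp_generic_copy_frags(struct sk_buff *skb, struct page_pool *pool,
                                  const void *data, unsigned int len)
{
        while (len) {
                unsigned int copy = min_t(unsigned int, len, PAGE_SIZE);
                struct page *page = page_pool_dev_alloc_pages(pool);

                if (!page)
                        return -ENOMEM;

                memcpy(page_address(page), data, copy);
                skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, 0,
                                copy, PAGE_SIZE);
                data += copy;
                len -= copy;
        }

        /* let the skb release path give the pages back to the pool */
        skb_mark_for_recycle(skb);
        return 0;
}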

> 
> First, options for placing the allocator:
>  - struct netdev_rx_queue
>  - per-CPU
> 
> IMO per-CPU has better scaling properties - you're less likely to
> increase the CPU count to infinity than spawn extra netdev queues.

ack
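
For reference, a minimal sketch of the per-CPU option (not the actual patch;
the field and helper names are invented, and pool_size just mirrors the
8-entry figure from ad1 below):

#include <linux/netdevice.h>
#include <net/page_pool/types.h>

/* Sketch only: lazily create a small page_pool hanging off softnet_data.
 * Assumes struct softnet_data gains a "struct page_pool *page_pool" member
 * and that callers run in BH/Rx context on the local CPU (see ad3 below).
 */
static struct page_pool *softnet_get_page_pool(void)
{
        struct softnet_data *sd = this_cpu_ptr(&softnet_data);
        struct page_pool_params pp_params = {
                .order          = 0,
                .pool_size      = 8,    /* small cache, see ad1 below */
                .nid            = NUMA_NO_NODE,
        };
        struct page_pool *pp;

        if (likely(sd->page_pool))
                return sd->page_pool;

        pp = page_pool_create(&pp_params);
        if (IS_ERR(pp))
                return NULL;

        sd->page_pool = pp;
        return pp;
}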

> 
> The second question is:
>  - page_frag_cache
>  - page_pool
> 
> I like the page pool because we have an increasing amount of infra for
> it, and page pool is already used in veth, which we can hopefully also
> de-duplicate if we have a per-CPU one, one day. But I do agree that
> it's not a perfect fit.
> 
> To answer Jesper's questions:
>  ad1. cache size - we can lower the cache to match page_frag_cache, 
>       so I think 8 entries? page frag cache can give us bigger frags 
>       and therefore lower frag count, so that's a minus for using 
>       page pool
>  ad2. nl API - we can extend netlink to dump unbound page pools fairly
>       easily, I didn't want to do it without a clear use case, but I
>       don't think there are any blockers
>  ad3. locking - a bit independent of allocator but fair point, we assume
>       XDP generic or Rx path for now, so sirq context / bh locked out
>  ad4. right, well, right, IDK what real workloads need, and whether 
>       XDP generic should be optimized at all.. I personally lean
>       towards "no"
>  
> Sorry if I haven't helped much to clarify the direction :)
> I have no strong preference on question #2, I would prefer to not add
> per-queue state for something that's in no way tied to the device
> (question #1 -> per-CPU). 

Relying on netdev_alloc_cache/napi_alloc_cache would have the upside of reusing
the current infrastructure (IIRC my first revision used this approach).
The downside is we can't unify the code with the veth driver; it is the other
way around if we add per-cpu page_pools :). Anyway, I am fine with having a
per-cpu page_pool similar to netdev_alloc_cache/napi_alloc_cache.
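
For comparison, the page_frag_cache flavour would look roughly like this
(again only a sketch, assuming softirq context as per ad3, with truesize
accounting simplified):

#include <linux/mm.h>
#include <linux/skbuff.h>

/* Sketch of the page_frag_cache variant: napi_alloc_frag() hands back a
 * fragment of a page from the per-CPU napi_alloc_cache, which we turn back
 * into a page/offset pair for skb_add_rx_frag().
 */
static int xdp_generic_copy_frags_fc(struct sk_buff *skb,
                                     const void *data, unsigned int len)
{
        while (len) {
                unsigned int copy = min_t(unsigned int, len, PAGE_SIZE);
                void *frag = napi_alloc_frag(copy);
                struct page *page;

                if (!frag)
                        return -ENOMEM;

                memcpy(frag, data, copy);
                page = virt_to_head_page(frag);
                skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
                                frag - page_address(page), copy, copy);
                data += copy;
                len -= copy;
        }

        return 0;
}

The pages here go back through the normal page allocator instead of being
recycled by a pool, which is also why this path is harder to share with veth.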

@Jesper/Ilias: what do you think?

> 
> You did good perf analysis of the options, could you share it here
> again?
> 

Copying the numbers over from my previous tests:

v00 (NS:ns0 - 192.168.0.1/24) <---> (NS:ns1 - 192.168.0.2/24) v01 ==(XDP_REDIRECT)==> v10 (NS:ns1 - 192.168.1.1/24) <---> (NS:ns2 - 192.168.1.2/24) v11

- v00: iperf3 client (pinned on core 0)
- v11: iperf3 server (pinned on core 7)

net-next veth codebase (page_pool APIs):
=======================================
- MTU  1500: ~ 5.42 Gbps
- MTU  8000: ~ 14.1 Gbps
- MTU 64000: ~ 18.4 Gbps

net-next veth codebase + netdev_alloc_cache/napi_alloc_cache:
=============================================================
- MTU  1500: ~ 6.62 Gbps
- MTU  8000: ~ 14.7 Gbps
- MTU 64000: ~ 19.7 Gbps

xdp_generic codebase + netdev_alloc_cache/napi_alloc_cache:
===========================================================
- MTU  1500: ~ 6.41 Gbps
- MTU  8000: ~ 14.2 Gbps
- MTU 64000: ~ 19.8 Gbps

xdp_generic codebase + page_pool in netdev_rx_queue:
====================================================
- MTU  1500: ~ 5.75 Gbps
- MTU  8000: ~ 15.3 Gbps
- MTU 64000: ~ 21.2 Gbps

IIRC relying on a per-cpu page_pool gives results similar to adding it in netdev_rx_queue,
but I can run the tests again on an updated kernel.

Regards,
Lorenzo


