> On Wed, 6 Dec 2023 13:41:49 +0100 Jesper Dangaard Brouer wrote:
> > BUT then I realized that PP have a weakness, which is the return/free
> > path that need to take a normal spin_lock, as that can be called from
> > any CPU (unlike the RX/alloc case). Thus, I fear that making multiple
> > devices share a page_pool via softnet_data, increase the chance of lock
> > contention when packets are "freed" returned/recycled.
>
> I was thinking we can add a pcpu CPU ID to page pool so that
> napi_pp_put_page() has a chance to realize that its on the "right CPU"
> and feed the cache directly.

Are we going to use these page_pools just for virtual devices (e.g. veth)
or even for hw NICs? If we do not bind the page_pool to a netdevice, I
think we can't rely on it to DMA map/unmap the buffer, right?

Moreover, are we going to rework page_pool stats first? It seems a bit
weird to have a percpu struct with a percpu pointer in it, right?

Regards,
Lorenzo
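[Editor's note: a toy userspace sketch of the recycle-path check Jakub describes above, added for readers outside the thread. The names (struct toy_pool, toy_pool_put) and the owner_cpu field are made up for illustration; this is not the kernel's page_pool API, and the real check would also have to verify that the caller is in the owning NAPI/softirq context before touching the lockless cache.]

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdbool.h>
#include <stdio.h>

#define CACHE_SIZE 64
#define RING_SIZE  256

struct toy_pool {
	int owner_cpu;			/* CPU whose NAPI context owns the pool */

	/* Lockless cache: only ever touched from owner_cpu. */
	void *cache[CACHE_SIZE];
	unsigned int cache_cnt;

	/* Fallback ring for frees coming from any other CPU. */
	pthread_spinlock_t ring_lock;
	void *ring[RING_SIZE];
	unsigned int ring_cnt;
};

/* Model of the idea: compare the caller's CPU with the pool's owner and
 * only then feed the direct (lock-free) cache; any other CPU falls back
 * to the locked ring, which is where the contention Jesper worries about
 * would show up when many devices share one per-CPU pool. */
static bool toy_pool_put(struct toy_pool *pool, void *page)
{
	bool ok;

	if (sched_getcpu() == pool->owner_cpu &&
	    pool->cache_cnt < CACHE_SIZE) {
		pool->cache[pool->cache_cnt++] = page;	/* direct, no lock */
		return true;
	}

	/* "Wrong" CPU: take the lock, like the ptr_ring producer side. */
	pthread_spin_lock(&pool->ring_lock);
	ok = pool->ring_cnt < RING_SIZE;
	if (ok)
		pool->ring[pool->ring_cnt++] = page;
	pthread_spin_unlock(&pool->ring_lock);
	return ok;
}

int main(void)
{
	struct toy_pool pool = { .owner_cpu = sched_getcpu() };
	int dummy;

	pthread_spin_init(&pool.ring_lock, PTHREAD_PROCESS_PRIVATE);

	/* Same thread, so (barring migration) the direct path is taken. */
	printf("direct path used: %d\n", toy_pool_put(&pool, &dummy));
	return 0;
}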