Re: [PATCH v3 00/26] Split netmem from struct page

On 11/01/2023 14.21, Matthew Wilcox wrote:
> On Wed, Jan 11, 2023 at 04:25:46PM +0800, Yunsheng Lin wrote:
>> On 2023/1/11 12:21, Matthew Wilcox (Oracle) wrote:
>>> The MM subsystem is trying to reduce struct page to a single pointer.
>>> The first step towards that is splitting struct page by its individual
>>> users, as has already been done with folio and slab.  This patchset does
>>> that for netmem which is used for page pools.
>>
>> As page pool is only used for the rx side in the net stack, depending on
>> the driver, a lot more memory for the net stack comes from
>> page_frag_alloc_align(), kmem caches, etc.
>>
>> Naming it netmem seems a little overkill; perhaps a more specific name
>> for the page pool, such as pp_cache?
>>
>> @Jesper & Ilias
>> Any better idea?

I like the 'netmem' name.
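
To make that concrete: as I read the series, netmem simply gives the
page_pool fields that already live inside struct page their own type.  A
simplified sketch of the idea in C (the layout below is approximate and
only for illustration -- see the actual patches for the real definition):

/* Simplified sketch only; the real definition is in the patchset. */
struct netmem {
	unsigned long flags;		/* page flags, must stay first */
	unsigned long pp_magic;		/* marks page_pool-owned memory */
	struct page_pool *pp;		/* owning pool */
	unsigned long _pp_mapping_pad;
	unsigned long dma_addr;		/* DMA address stored by the pool */
	union {
		unsigned long dma_addr_upper;	/* upper bits for 64-bit DMA on 32-bit */
		atomic_long_t pp_frag_count;	/* refcount for the frag API */
	};
	atomic_t _mapcount;
	atomic_t _refcount;		/* must line up with page->_refcount */
};

The point is that page_pool code can then take and return 'struct netmem *'
instead of 'struct page *', while the layout is kept in sync with struct
page (the real patches assert the field offsets match) until struct page
actually shrinks.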

>> And it seems some APIs may need changing too, as we are not pooling
>> 'pages' now.

IMHO it would be overkill to rename the page_pool to e.g. netmem_pool, as
it would generate too much churn and be hard to follow in git, since the
source file page_pool.c would also have to be renamed.  I guess we keep
page_pool for historical reasons ;-)

> I raised the question of naming in v1, six weeks ago, and nobody had
> any better names.  Seems a little unfair to ignore the question at first
> and then bring it up now.  I'd hate to miss the merge window because of
> a late-breaking major request like this.
>
> https://lore.kernel.org/netdev/20221130220803.3657490-1-willy@xxxxxxxxxxxxx/

> I'd like to understand what we think we'll do in networking when we trim
> struct page down to a single pointer.  All these usages that aren't from
> page_pool -- what information does networking need to track per-allocation?
> Would it make sense for the netmem to describe all memory used by the
> networking stack, and have allocators other than page_pool also return
> netmem,

This is also how I see the future: that other netstack "allocators" can
return and work with 'netmem' objects.  IMHO we are already cramming too
many use-cases into page_pool (like the frag support Yunsheng added).
IMHO there is room for other netstack "allocators" that can utilize
netmem.  The page_pool is optimized for RX-NAPI workloads; using it for
other purposes is a mistake IMHO.  People should create other netstack
"allocators" that solve their specific use-cases.  E.g. the TX path
likely needs another "allocator" optimized for this TX use-case.
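
Just to illustrate what I mean -- this is a purely hypothetical sketch,
none of these symbols exist; the only point is the types in the
signatures.  A TX-oriented allocator could hand out the same netmem
descriptors the rest of the stack already understands:

/* Hypothetical API sketch -- tx_mem_cache and these functions are invented. */
struct netmem;			/* the descriptor introduced by this series */
struct tx_mem_cache;		/* imaginary TX-side allocator instance */

/* Create a per-queue cache tuned for TX/completion patterns. */
struct tx_mem_cache *tx_mem_cache_create(int nid, unsigned int order);

/* Alloc/free produce and consume netmem, not struct page. */
struct netmem *tx_mem_alloc(struct tx_mem_cache *cache, gfp_t gfp);
void tx_mem_free(struct tx_mem_cache *cache, struct netmem *nmem);

void tx_mem_cache_destroy(struct tx_mem_cache *cache);

That way the rest of the stack never needs to care which "allocator" a
buffer came from, only that it is a netmem.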

> or does the normal usage of memory in the net stack not need to
> track that information?

The page refcnt is (obviously) used by the netstack as tracked
information.  I have seen drivers that use the DMA mapping stored
directly in the page/'netmem', instead of having to store this
separately in the driver.
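
For example, something like the refill path below (a sketch: 'my_rx_ring'
and rx_refill_one() are invented for illustration, but
page_pool_dev_alloc_pages() and page_pool_get_dma_addr() are the existing
API).  When the pool is created with PP_FLAG_DMA_MAP, the DMA address
lives in the page itself, so the driver's ring entry needs no separate
dma_addr_t:

/* Sketch only: my_rx_ring and rx_refill_one() are made up for illustration. */
static void rx_refill_one(struct my_rx_ring *ring, struct page_pool *pool)
{
	struct page *page = page_pool_dev_alloc_pages(pool);

	if (!page)
		return;

	/* The DMA mapping was stored in the page by page_pool at map time. */
	ring->desc[ring->next].addr = page_pool_get_dma_addr(page);
	ring->pages[ring->next] = page;
	ring->next = (ring->next + 1) % ring->size;
}

With this series the same pattern works on netmem instead of struct page.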

--Jesper




