Re: [PATCH net-next v4 5/5] net: Document netmem driver support

On 12/11/2024 1:20 PM, Mina Almasry wrote:

Document expectations from drivers looking to add support for device
memory tcp or other netmem based features.

Signed-off-by: Mina Almasry <almasrymina@xxxxxxxxxx>

Hi Mina,

Just a couple thoughts as this passed by me. These can be saved for a later update if the rest of this patchset is ready to go.


---

v4:
- Address comments from Randy.
- Change docs to netmem focus (Jakub).
- Address comments from Jakub.

---
  Documentation/networking/index.rst  |  1 +
  Documentation/networking/netmem.rst | 62 +++++++++++++++++++++++++++++
  2 files changed, 63 insertions(+)
  create mode 100644 Documentation/networking/netmem.rst

diff --git a/Documentation/networking/index.rst b/Documentation/networking/index.rst
index 46c178e564b3..058193ed2eeb 100644
--- a/Documentation/networking/index.rst
+++ b/Documentation/networking/index.rst
@@ -86,6 +86,7 @@ Contents:
     netdevices
     netfilter-sysctl
     netif-msg
+   netmem
     nexthop-group-resilient
     nf_conntrack-sysctl
     nf_flowtable
diff --git a/Documentation/networking/netmem.rst b/Documentation/networking/netmem.rst
new file mode 100644
index 000000000000..f9f03189c53c
--- /dev/null
+++ b/Documentation/networking/netmem.rst
@@ -0,0 +1,62 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+================
+Netmem
+================
+
+
+Introduction
+============
+
+Device memory TCP, and likely more upcoming features, are reliant on netmem

Device memory TCP is singular, so s/are/is/

+support in the driver. This document outlines what drivers need to do to support netmem.

Can we get a summary of what netmem itself is and what it is for? There is a bit of explanation buried below in (3), but it would be good to have something here at the top.

+
+
+Driver support
+==============
+
+1. The driver must support page_pool. The driver must not do its own recycling
+   on top of page_pool.
+
+2. The driver must support the tcp-data-split ethtool option.
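
While I'm here: it might be handy to sketch what the tcp-data-split
plumbing in (2) looks like. Roughly (untested; my_priv and hds_enabled
are invented driver bits, the ethtool structs and enums are the real
ones):

  static void my_get_ringparam(struct net_device *dev,
                               struct ethtool_ringparam *ring,
                               struct kernel_ethtool_ringparam *kring,
                               struct netlink_ext_ack *extack)
  {
          struct my_priv *priv = netdev_priv(dev);

          /* report the current header/data split state to ethtool */
          kring->tcp_data_split = priv->hds_enabled ?
                  ETHTOOL_TCP_DATA_SPLIT_ENABLED :
                  ETHTOOL_TCP_DATA_SPLIT_DISABLED;
  }

  /* plus, in struct ethtool_ops:
   *         .supported_ring_params = ETHTOOL_RING_USE_TCP_DATA_SPLIT,
   * and honoring kring->tcp_data_split in .set_ringparam
   */
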
+
+3. The driver must use the page_pool netmem APIs. The netmem APIs currently
+   correspond 1-to-1 with the page APIs. Conversion to netmem should be
+   achievable by switching from the page APIs to the netmem APIs and tracking
+   memory via netmem_refs in the driver rather than struct page *:
+
+   - page_pool_alloc -> page_pool_alloc_netmem
+   - page_pool_get_dma_addr -> page_pool_get_dma_addr_netmem
+   - page_pool_put_page -> page_pool_put_netmem
+
+   Not all page APIs have netmem equivalents at the moment. If your driver
+   relies on a missing netmem API, feel free to add one and propose it to netdev@ or
+   reach out to almasrymina@xxxxxxxxxx for help adding the netmem API.

You may want to replace your name with "the maintainers" and let the MAINTAINERS file keep track of who currently takes care of netmem things, rather than risk this email getting stale and forgotten.
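
It might also help readers to see what the conversion in (3) looks like in
an rx path. Maybe something like this (untested sketch; the my_* names are
invented, the page_pool calls are the ones listed above):

  static int my_alloc_rx_buf(struct my_ring *ring, struct my_rx_buf *buf)
  {
          unsigned int offset, size = ring->buf_len;
          netmem_ref netmem;

          /* was: page = page_pool_alloc(ring->pp, &offset, &size, gfp); */
          netmem = page_pool_alloc_netmem(ring->pp, &offset, &size,
                                          GFP_ATOMIC);
          if (!netmem)
                  return -ENOMEM;

          /* was: dma = page_pool_get_dma_addr(page); */
          buf->dma = page_pool_get_dma_addr_netmem(netmem) + offset;

          /* track memory as netmem_ref, not struct page * */
          buf->netmem = netmem;
          return 0;
  }

  /* and on teardown: page_pool_put_netmem() instead of
   * page_pool_put_page()
   */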


+
+4. The driver must use the following PP_FLAGS:
+
+   - PP_FLAG_DMA_MAP: netmem is not dma-mappable by the driver. The driver
+     must delegate the dma mapping to the page_pool.

This is a bit confusing... if not dma-mappable, then why use PP_FLAG_DMA_MAP to ask page_pool to do it? A little more info might be useful such as, " ... must delegate the dma mapping to the page_pool which knows when dma-mapping is or is not appropriate".

Thanks,
sln


+   - PP_FLAG_DMA_SYNC_DEV: netmem dma addr is not necessarily dma-syncable
+     by the driver. The driver must delegate the dma syncing to the page_pool.
+   - PP_FLAG_ALLOW_UNREADABLE_NETMEM. The driver must specify this flag iff
+     tcp-data-split is enabled.
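
Maybe also worth showing the flags in context, e.g. (sketch; the field
values are made up, the flags and page_pool_create() are real):

  struct page_pool_params pp = {
          .flags          = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
          .pool_size      = ring->size,
          .nid            = NUMA_NO_NODE,
          .dev            = &pdev->dev,
          .dma_dir        = DMA_FROM_DEVICE,
          .max_len        = PAGE_SIZE,
          .netdev         = netdev,
          .queue_idx      = ring->index,
  };

  /* iff tcp-data-split is enabled */
  if (hds_enabled)
          pp.flags |= PP_FLAG_ALLOW_UNREADABLE_NETMEM;

  ring->pp = page_pool_create(&pp);
  if (IS_ERR(ring->pp))
          return PTR_ERR(ring->pp);
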
+
+5. The driver must not assume the netmem is readable and/or backed by pages.
+   The netmem returned by the page_pool may be unreadable, in which case
+   netmem_address() will return NULL. The driver must correctly handle
+   unreadable netmem, i.e. don't attempt to handle its contents when
+   netmem_address() is NULL.
+
+   Ideally, drivers should not have to check the underlying netmem type via
+   helpers like netmem_is_net_iov() or convert the netmem to any of its
+   underlying types via netmem_to_page() or netmem_to_net_iov(). In most cases,
+   netmem or page_pool helpers that abstract this complexity are provided
+   (and more can be added).
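
A tiny example here might save driver writers some head-scratching, e.g.
(sketch; hdr/offset/len/truesize are placeholders, netmem_address() and
skb_add_rx_frag_netmem() are the real helpers):

  void *va = netmem_address(netmem);

  if (!va) {
          /* unreadable netmem: the CPU must not touch the payload,
           * only attach it to the skb as a frag
           */
          skb_add_rx_frag_netmem(skb, 0, netmem, offset, len, truesize);
  } else {
          /* readable netmem: safe to inspect/copy headers, etc. */
          memcpy(hdr, va + offset, hdr_len);
  }
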
+
+6. The driver must use page_pool_dma_sync_netmem_for_cpu() in lieu of
+   dma_sync_single_range_for_cpu(). For some memory providers, dma syncing
+   for the CPU will be done by the page_pool; for others (particularly the
+   dmabuf memory provider), dma syncing for the CPU is the responsibility of
+   the userspace using dmabuf APIs. The driver must delegate the entire
+   dma-syncing operation to the page_pool, which will do it correctly.
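
In the rx completion path that's roughly (ring->pp invented as in the
sketches above):

  /* was: dma_sync_single_range_for_cpu(dev, dma, offset, len,
   *                                    DMA_FROM_DEVICE);
   */
  page_pool_dma_sync_netmem_for_cpu(ring->pp, netmem, offset, len);
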
--
2.47.0.338.g60cca15819-goog






