Re: [PATCH net-next v6 5/6] page_pool: update document about frag API

Hi--

On 8/14/23 05:56, Yunsheng Lin wrote:
As more drivers begin to use the frag API, update the
document about how to decide which API to use for the
driver author.

Signed-off-by: Yunsheng Lin <linyunsheng@xxxxxxxxxx>
CC: Lorenzo Bianconi <lorenzo@xxxxxxxxxx>
CC: Alexander Duyck <alexander.duyck@xxxxxxxxx>
CC: Liang Chen <liangchen.linux@xxxxxxxxx>
CC: Alexander Lobakin <aleksander.lobakin@xxxxxxxxx>
---
  Documentation/networking/page_pool.rst |  4 +-
  include/net/page_pool/helpers.h        | 58 +++++++++++++++++++++++---
  2 files changed, 55 insertions(+), 7 deletions(-)


diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index b920224f6584..0f1eaa2986f9 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -8,13 +8,28 @@
  /**
   * DOC: page_pool allocator
   *
- * The page_pool allocator is optimized for the XDP mode that
- * uses one frame per-page, but it can fallback on the
- * regular page allocator APIs.
+ * The page_pool allocator is optimized for recycling page or page frag used by
+ * skb packet and xdp frame.
   *
- * Basic use involves replacing alloc_pages() calls with the
- * page_pool_alloc_pages() call.  Drivers should use
- * page_pool_dev_alloc_pages() replacing dev_alloc_pages().
+ * Basic use involves replacing napi_alloc_frag() and alloc_pages() calls with
+ * page_pool_cache_alloc() and page_pool_alloc(), which allocate memory with or
+ * without page splitting depending on the requested memory size.
+ *
+ * If the driver knows that it always requires full pages or its allocates are

                                                             allocations

+ * always smaller than half a page, it can use one of the more specific API
+ * calls:
+ *
+ * 1. page_pool_alloc_pages(): allocate memory without page splitting when
+ * driver knows that the memory it need is always bigger than half of the page
+ * allocated from page pool. There is no cache line dirtying for 'struct page'
+ * when a page is recycled back to the page pool.
+ *
+ * 2. page_pool_alloc_frag(): allocate memory with page splitting when driver
+ * knows that the memory it need is always smaller than or equal to half of the
+ * page allocated from page pool. Page splitting enables memory saving and thus
+ * avoid TLB/cache miss for data access, but there also is some cost to

      avoids

+ * implement page splitting, mainly some cache line dirtying/bouncing for
+ * 'struct page' and atomic operation for page->pp_frag_count.
   *
   * API keeps track of in-flight pages, in order to let API user know
   * when it is safe to free a page_pool object.  Thus, API users
@@ -100,6 +115,14 @@ static inline struct page *page_pool_alloc_frag(struct page_pool *pool,
  	return __page_pool_alloc_frag(pool, offset, size, gfp);
  }
+/**
+ * page_pool_dev_alloc_frag() - allocate a page frag.
+ * @pool[in]	pool from which to allocate
+ * @offset[out]	offset to the allocated page
+ * @size[in]	requested size

Please use kernel-doc syntax/notation here.
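That is, kernel-doc takes "@name:" followed by the description, with no
"[in]"/"[out]" markers on the parameter name; if the direction matters,
say it in the description text. Something like this (sketch only, wording
up to you):

  /**
   * page_pool_dev_alloc_frag() - allocate a page frag.
   * @pool: pool from which to allocate
   * @offset: output, offset to the allocated page
   * @size: requested size
   *
   * Get a page frag from the page allocator or page_pool caches.
   */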

+ *
+ * Get a page frag from the page allocator or page_pool caches.
+ */
  static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool,
  						    unsigned int *offset,
  						    unsigned int size)
@@ -143,6 +166,14 @@ static inline struct page *page_pool_alloc(struct page_pool *pool,
  	return page;
  }
+/**
+ * page_pool_dev_alloc() - allocate a page or a page frag.
+ * @pool[in]:		pool from which to allocate
+ * @offset[out]:	offset to the allocated page
+ * @size[in, out]:	in as the requested size, out as the allocated size

and here.

+ *
+ * Get a page or a page frag from the page allocator or page_pool caches.
+ */
  static inline struct page *page_pool_dev_alloc(struct page_pool *pool,
  					       unsigned int *offset,
  					       unsigned int *size)
@@ -165,6 +196,13 @@ static inline void *page_pool_cache_alloc(struct page_pool *pool,
  	return page_address(page) + offset;
  }
+/**
+ * page_pool_dev_cache_alloc() - allocate a cache.
+ * @pool[in]:		pool from which to allocate
+ * @size[in, out]:	in as the requested size, out as the allocated size

and here.

+ *
+ * Get a cache from the page allocator or page_pool caches.
+ */
  static inline void *page_pool_dev_cache_alloc(struct page_pool *pool,
  					      unsigned int *size)
  {
@@ -316,6 +354,14 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
  	page_pool_put_full_page(pool, page, true);
  }
+/**
+ * page_pool_cache_free() - free a cache into the page_pool
+ * @pool[in]:		pool from which cache was allocated
+ * @data[in]:		cache to free
+ * @allow_direct[in]:	freed by the consumer, allow lockless caching

and here.

+ *
+ * Free a cache allocated from page_pool_dev_cache_alloc().
+ */
  static inline void page_pool_cache_free(struct page_pool *pool, void *data,
  					bool allow_direct)
  {

Thanks.
