Re: [PATCH net-next 3/3] page_pool: Track DMA-mapped pages and unmap them when destroying the pool


 



On 3/14/2025 6:10 PM, Toke Høiland-Jørgensen wrote:

...


To avoid having to walk the entire xarray on unmap to find the page
reference, we stash the ID assigned by xa_alloc() into the page
structure itself, using the upper bits of the pp_magic field. This
requires a couple of defines to avoid conflicting with the
POISON_POINTER_DELTA define, but this is all evaluated at compile-time,
so does not affect run-time performance. The bitmap calculations in this
patch give the following number of bits for different architectures:

- 24 bits on 32-bit architectures
- 21 bits on PPC64 (because of the definition of ILLEGAL_POINTER_VALUE)
- 32 bits on other 64-bit architectures
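
For concreteness, the packing described above could look roughly like the
sketch below. The names and values are illustrative only, not the patch's
actual defines; the point is that the shift/mask are compile-time constants,
so stashing and recovering the ID is just a couple of bit operations:

	/* Illustrative layout: signature in the low bits, xarray ID above it.
	 * PP_DMA_INDEX_SHIFT/BITS are hypothetical names; the real values
	 * depend on PP_SIGNATURE and POISON_POINTER_DELTA on each arch
	 * (24/21/32 usable bits as listed above).
	 */
	#define PP_DMA_INDEX_SHIFT	8
	#define PP_DMA_INDEX_BITS	24
	#define PP_DMA_INDEX_MASK	GENMASK(PP_DMA_INDEX_BITS + PP_DMA_INDEX_SHIFT - 1, \
						PP_DMA_INDEX_SHIFT)

	/* on DMA map: stash the ID returned by xa_alloc() */
	page->pp_magic = (page->pp_magic & ~PP_DMA_INDEX_MASK) |
			 FIELD_PREP(PP_DMA_INDEX_MASK, id);

	/* on DMA unmap: recover the ID without walking the xarray */
	id = FIELD_GET(PP_DMA_INDEX_MASK, page->pp_magic);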

From commit c07aea3ef4d4 ("mm: add a signature in struct page"):
"The page->signature field is aliased to page->lru.next and
page->compound_head, but it can't be set by mistake because the
signature value is a bad pointer, and can't trigger a false positive
in PageTail() because the last bit is 0."

And commit 8a5e5e02fc83 ("include/linux/poison.h: fix LIST_POISON{1,2} offset"):
"Poison pointer values should be small enough to find a room in
non-mmap'able/hardly-mmap'able space."

So the questions seem to be:
1. Does stashing the ID cause page->pp_magic to land in mmap'able/
   more easily mmap'able space? If so, how do we make sure this does
   not cause any security problem?
2. Does masking page->pp_magic risk treating a valid pointer in
   page->lru.next or page->compound_head as a valid PP_SIGNATURE,
   which might cause page_pool to recycle a page that was not
   allocated via page_pool?
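
To make question 2 concrete, the concern is presumably about a masked check
along the lines of the sketch below (the helper and mask names are made up
here, not taken from the patch): once the ID bits are ignored when comparing
against PP_SIGNATURE, a kernel pointer that happens to have the matching low
bits would pass the test on a page never touched by page_pool:

	/* illustrative only: is this page (still) owned by a page_pool? */
	static inline bool page_is_pp_page(const struct page *page)
	{
		/* pp_magic aliases page->lru.next / page->compound_head, so
		 * this must never match a real pointer stored there.
		 */
		return (page->pp_magic & ~PP_DMA_INDEX_MASK) == PP_SIGNATURE;
	}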


Since all the tracking is performed on DMA map/unmap, no additional code
is needed in the fast path, meaning the performance overhead of this
tracking is negligible. A micro-benchmark shows that the total overhead
of using xarray for this purpose is about 400 ns (39 cycles(tsc) 395.218
ns; sum for both map and unmap[1]). Since this cost is only paid on DMA
map and unmap, it seems like an acceptable cost to fix the late unmap

For most use cases where PP_FLAG_DMA_MAP is set and the IOMMU is off,
the DMA map and unmap operations themselves are almost negligible as
said below, so the added cost amounts to roughly a 200% performance
degradation, which doesn't seem like an acceptable cost.

issue. Further optimisation can narrow the cases where this cost is
paid (for instance by eliding the tracking when DMA map/unmap is a
no-op).

The above was discussed in [1] and brought up again in [2], so cc'ing
Robin to clarify whether he still views the above as a misuse of the
DMA API.

1. https://lore.kernel.org/all/9a4d1357-f30d-420d-a575-7ae305ca6dda@xxxxxxxxxx/

2. https://lore.kernel.org/all/caf31b5e-0e8f-4844-b7ba-ef59ed13b74e@xxxxxxx/
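
For reference, the tracking whose cost is being measured is conceptually just
an xa_alloc() on the map side and an xa_erase() on the unmap side; the ~400 ns
quoted above is the sum of those two calls. A simplified sketch (hypothetical
page_pool_set/get_dma_index() helpers, declarations and some error handling
trimmed, not the exact patch code):

	/* DMA map side: remember the mapped page under a fresh ID */
	dma = dma_map_page_attrs(pool->p.dev, page, 0,
				 PAGE_SIZE << pool->p.order, pool->p.dma_dir,
				 DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
	if (dma_mapping_error(pool->p.dev, dma))
		return false;
	if (xa_alloc(&pool->dma_mapped, &id, page, xa_limit_32b, gfp)) {
		dma_unmap_page_attrs(pool->p.dev, dma, PAGE_SIZE << pool->p.order,
				     pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
		return false;
	}
	page_pool_set_dma_index(page, id);	/* hypothetical helper: stash ID in pp_magic */

	/* DMA unmap side: the stashed ID avoids walking the xarray */
	id = page_pool_get_dma_index(page);	/* hypothetical helper */
	xa_erase(&pool->dma_mapped, id);
	dma_unmap_page_attrs(pool->p.dev, dma, PAGE_SIZE << pool->p.order,
			     pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);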


The extra memory needed to track the pages is neatly encapsulated inside
xarray, which uses the 'struct xa_node' structure to track items. This
structure is 576 bytes long, with slots for 64 items, meaning that a
full node incurs only 9 bytes of overhead per slot it tracks (in
practice, it probably won't be this efficient, but in any case it should

Is there any debug infrastructure to tell when it is not this
efficient? In the worst case there may be 576 bytes of overhead for a
single page.

be an acceptable overhead).
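
Spelling out the arithmetic behind those numbers (taken from the figures
quoted above, not measured here):

	/* sizeof(struct xa_node) == 576, 64 slots per node                       */
	/* full node:  576 / 64 =   9 bytes of tracking overhead per page (best)  */
	/* lone entry: 576 /  1 = 576 bytes of overhead for that one page (worst) */
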
[0] https://lore.kernel.org/all/CAHS8izPg7B5DwKfSuzz-iOop_YRbk3Sd6Y4rX7KBG9DcVJcyWg@xxxxxxxxxxxxxx/
[1] https://lore.kernel.org/r/ae07144c-9295-4c9d-a400-153bb689fe9e@xxxxxxxxxx

Reported-by: Yonglong Liu <liuyonglong@xxxxxxxxxx>
Closes: https://lore.kernel.org/r/8743264a-9700-4227-a556-5f931c720211@xxxxxxxxxx
Fixes: ff7d6b27f894 ("page_pool: refurbish version of page_pool code")
Suggested-by: Mina Almasry <almasrymina@xxxxxxxxxx>
Reviewed-by: Mina Almasry <almasrymina@xxxxxxxxxx>
Reviewed-by: Jesper Dangaard Brouer <hawk@xxxxxxxxxx>
Tested-by: Jesper Dangaard Brouer <hawk@xxxxxxxxxx>
Tested-by: Qiuling Ren <qren@xxxxxxxxxx>
Tested-by: Yuying Ma <yuma@xxxxxxxxxx>
Signed-off-by: Toke Høiland-Jørgensen <toke@xxxxxxxxxx>

...

@@ -1084,8 +1112,32 @@ static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
static void page_pool_scrub(struct page_pool *pool)
  {
+	unsigned long id;
+	void *ptr;
+
  	page_pool_empty_alloc_cache_once(pool);
-	pool->destroy_cnt++;
+	if (!pool->destroy_cnt++ && pool->dma_map) {
+		if (pool->dma_sync) {
+			/* paired with READ_ONCE in
+			 * page_pool_dma_sync_for_device() and
+			 * __page_pool_dma_sync_for_cpu()
+			 */
+			WRITE_ONCE(pool->dma_sync, false);
+
+			/* Make sure all concurrent returns that may see the old
+			 * value of dma_sync (and thus perform a sync) have
+			 * finished before doing the unmapping below. Skip the
+			 * wait if the device doesn't actually need syncing, or
+			 * if there are no outstanding mapped pages.
+			 */
+			if (dma_dev_need_sync(pool->p.dev) &&
+			    !xa_empty(&pool->dma_mapped))
+				synchronize_net();

I guess the above synchronize_net() assumes that the dma sync API is
always called in softirq context, as there seems to be no rcu read
lock added in this patch to pair with it.

Can't page_pool_put_page() be called in non-softirq context when
allow_direct is false and in_softirq() returns false?

+		}
+
+		xa_for_each(&pool->dma_mapped, id, ptr)
+			__page_pool_release_page_dma(pool, page_to_netmem(ptr));
+	}
/* No more consumers should exist, but producers could still
  	 * be in-flight.
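
To spell out the pairing the question above is about: for the
WRITE_ONCE()/synchronize_net() on the destroy side to guarantee that
in-flight returns have finished syncing, the reader side would need to sit
inside an RCU (or softirq/BH) read-side critical section, roughly like the
sketch below (illustrative only, variable names assumed, not code from the
patch):

	rcu_read_lock();	/* or run in softirq context, which implies it */
	if (READ_ONCE(pool->dma_sync))
		dma_sync_single_range_for_device(pool->p.dev, dma_addr,
						 pool->p.offset, dma_sync_size,
						 pool->p.dma_dir);
	rcu_read_unlock();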





