Yunsheng Lin <yunshenglin0825@xxxxxxxxx> writes:

> On 3/14/2025 6:10 PM, Toke Høiland-Jørgensen wrote:
>
> ...
>
>>
>> To avoid having to walk the entire xarray on unmap to find the page
>> reference, we stash the ID assigned by xa_alloc() into the page
>> structure itself, using the upper bits of the pp_magic field. This
>> requires a couple of defines to avoid conflicting with the
>> POINTER_POISON_DELTA define, but this is all evaluated at compile-time,
>> so does not affect run-time performance. The bitmap calculations in this
>> patch give the following number of bits for different architectures:
>>
>> - 24 bits on 32-bit architectures
>> - 21 bits on PPC64 (because of the definition of ILLEGAL_POINTER_VALUE)
>> - 32 bits on other 64-bit architectures
>
> From commit c07aea3ef4d4 ("mm: add a signature in struct page"):
> "The page->signature field is aliased to page->lru.next and
> page->compound_head, but it can't be set by mistake because the
> signature value is a bad pointer, and can't trigger a false positive
> in PageTail() because the last bit is 0."
>
> And commit 8a5e5e02fc83 ("include/linux/poison.h: fix LIST_POISON{1,2}
> offset"):
> "Poison pointer values should be small enough to find a room in
> non-mmap'able/hardly-mmap'able space."
>
> So the question seems to be:
> 1. Does stashing the ID cause page->pp_magic to be in the mmap'able/
>    easier-mmap'able space? If so, how can we make sure this will not
>    cause any security problem?
> 2. Could masking page->pp_magic cause a valid pointer for
>    page->lru.next or page->compound_head to be treated as a valid
>    PP_SIGNATURE, which might cause page_pool to recycle a page not
>    allocated via page_pool?

Right, so my reasoning for why the defines in this patch work for this
is as follows: in both cases we need to make sure that the ID stashed in
that field never looks like a valid kernel pointer. For 64-bit arches
(where CONFIG_ILLEGAL_POINTER_VALUE is set), we make sure of this by
never writing to any bits that overlap with the illegal value (so that
the PP_SIGNATURE written to the field keeps it as an illegal pointer
value). For 32-bit arches, we make sure of this by making sure the
top-most bit is always 0 (the -1 in the define for _PP_DMA_INDEX_BITS
in the patch), which puts it outside the range used for kernel pointers
(AFAICT).

>> Since all the tracking is performed on DMA map/unmap, no additional code
>> is needed in the fast path, meaning the performance overhead of this
>> tracking is negligible. A micro-benchmark shows that the total overhead
>> of using xarray for this purpose is about 400 ns (39 cycles(tsc)
>> 395.218 ns; sum for both map and unmap[1]). Since this cost is only
>> paid on DMA map and unmap, it seems like an acceptable cost to fix the
>> late unmap
>
> For most use cases when PP_FLAG_DMA_MAP is set and IOMMU is off, the
> DMA map and unmap operations are almost negligible as said below, so the
> cost is about a 200% performance degradation, which doesn't seem like an
> acceptable cost.

I disagree. This only impacts the slow path; as long as pages are
recycled there is no additional cost. While your patch series has
demonstrated that it is *possible* to reduce the cost even in the slow
path, I don't think the complexity cost of this is worth it.

[...]

>> The extra memory needed to track the pages is neatly encapsulated inside
>> xarray, which uses the 'struct xa_node' structure to track items.
>> This structure is 576 bytes long, with slots for 64 items, meaning that a
>> full node incurs only 9 bytes of overhead per slot it tracks (in
>> practice, it probably won't be this efficient, but in any case it should
>
> Is there any debug infrastructure to know if it is not this efficient,
> as there may be a 576-byte overhead for a single page in the worst case?

There's an XA_DEBUG define which enables some dump functions, but I
don't think there's any API to inspect the memory usage. I guess you
could attach a BPF program and walk the structure, or something?

>> +	/* Make sure all concurrent returns that may see the old
>> +	 * value of dma_sync (and thus perform a sync) have
>> +	 * finished before doing the unmapping below. Skip the
>> +	 * wait if the device doesn't actually need syncing, or
>> +	 * if there are no outstanding mapped pages.
>> +	 */
>> +	if (dma_dev_need_sync(pool->p.dev) &&
>> +	    !xa_empty(&pool->dma_mapped))
>> +		synchronize_net();
>
> I guess the above synchronize_net() is assuming that the above DMA sync
> API is always called in softirq context, as it seems there is no RCU
> read lock added in this patch to be paired with it.

Yup, that was my assumption.

> Couldn't page_pool_put_page() be called in non-softirq context when
> allow_direct is false and in_softirq() returns false?

I am not sure if this happens in practice in any of the delayed return
paths we are worried about for this patch. If it does, we could apply
something like the diff below (on top of this patch). I can respin with
this if needed, but I'll wait a bit and give others a chance to chime
in.

-Toke

@@ -465,9 +465,13 @@ page_pool_dma_sync_for_device(const struct page_pool *pool,
 			      netmem_ref netmem,
 			      u32 dma_sync_size)
 {
-	if ((READ_ONCE(pool->dma_sync) & PP_DMA_SYNC_DEV) &&
-	    dma_dev_need_sync(pool->p.dev))
-		__page_pool_dma_sync_for_device(pool, netmem, dma_sync_size);
+	if (dma_dev_need_sync(pool->p.dev)) {
+		rcu_read_lock();
+		if (READ_ONCE(pool->dma_sync) & PP_DMA_SYNC_DEV)
+			__page_pool_dma_sync_for_device(pool, netmem,
+							dma_sync_size);
+		rcu_read_unlock();
+	}
 }

 static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem, gfp_t gfp)
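
To make the ID-stashing scheme discussed above a bit more concrete, here
is a rough, illustrative sketch of the pattern. This is not the actual
patch: the DEMO_* constants, the demo_track_page()/demo_untrack_page()
helpers and the bit widths are invented for the example, and the real
defines additionally have to account for POINTER_POISON_DELTA and the
per-arch pointer ranges discussed above.

/* Illustrative sketch only -- not the actual patch. DEMO_* and demo_*
 * names are invented; the real defines must also keep the stored value
 * an invalid kernel pointer as discussed in the thread.
 */
#include <linux/xarray.h>
#include <linux/bits.h>
#include <linux/mm_types.h>

#define DEMO_SIGNATURE	0x40UL		/* stand-in for PP_SIGNATURE */
#define DEMO_ID_SHIFT	8		/* low bits keep the signature */
#define DEMO_ID_BITS	24		/* bits available for the xarray ID */
#define DEMO_ID_MASK	GENMASK(DEMO_ID_SHIFT + DEMO_ID_BITS - 1, DEMO_ID_SHIFT)
#define DEMO_ID_LIMIT	XA_LIMIT(1, BIT(DEMO_ID_BITS) - 1)

static DEFINE_XARRAY_FLAGS(demo_mapped, XA_FLAGS_ALLOC);

/* On DMA map: register the page and stash the returned ID above the
 * signature bits in pp_magic.
 */
static int demo_track_page(struct page *page, gfp_t gfp)
{
	u32 id;
	int err;

	err = xa_alloc(&demo_mapped, &id, page, DEMO_ID_LIMIT, gfp);
	if (err)
		return err;

	page->pp_magic = DEMO_SIGNATURE | ((unsigned long)id << DEMO_ID_SHIFT);
	return 0;
}

/* On DMA unmap: recover the ID from pp_magic and drop the tracking
 * entry without walking the xarray.
 */
static void demo_untrack_page(struct page *page)
{
	u32 id = (page->pp_magic & DEMO_ID_MASK) >> DEMO_ID_SHIFT;

	xa_erase(&demo_mapped, id);
	page->pp_magic = 0;
}

The point is just that xa_alloc() both stores the page pointer and hands
back a small integer ID, so the unmap path can erase the entry directly
instead of walking the whole xarray.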