On 21/11/2024 8:04 am, Yunsheng Lin wrote:
> On 2024/11/21 0:17, Robin Murphy wrote:
>> On 20/11/2024 10:34 am, Yunsheng Lin wrote:
>>> Skip the dma sync operation for inflight pages before
>>> page_pool_destroy() returns to the driver, as the DMA API
>>> expects to be called with a valid device bound to a driver,
>>> as mentioned in [1].
>>>
>>> After page_pool_destroy() is called, a page is not expected
>>> to be recycled back to the pool->alloc cache, and the dma
>>> sync operation is not needed when the page is not recyclable
>>> or pool->ring is full. So only the dma sync operation for the
>>> inflight pages needs to be skipped, by clearing pool->dma_sync
>>> under the protection of the rcu lock when a page is recycled
>>> to pool->ring, ensuring that no dma sync operation is called
>>> after page_pool_destroy() returns.
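>>>
>>> As a rough sketch of the recycle side (simplified, with an
>>> illustrative helper name rather than the exact diff):
>>>
>>>   /* recycling a page back to pool->ring (illustrative name) */
>>>   static void page_pool_recycle_to_ring(struct page_pool *pool,
>>>                                         struct page *page)
>>>   {
>>>           rcu_read_lock();
>>>           /* pool->dma_sync is cleared by page_pool_destroy()
>>>            * before it waits in synchronize_rcu(), so no new
>>>            * sync can start once that wait has completed
>>>            */
>>>           if (pool->dma_sync)
>>>                   page_pool_dma_sync_for_device(pool, page,
>>>                                                 pool->p.max_len);
>>>           rcu_read_unlock();
>>>           /* ... then push the page onto pool->ring ... */
>>>   }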
>> Something feels off here - either this is a micro-optimisation which
>> I wouldn't really expect to be meaningful, or it means patch #2
>> doesn't actually do what it claims. If it really is possible to
>> attempt to dma_sync a page *after* page_pool_inflight_unmap() has
>> already reclaimed and unmapped it, that represents yet another DMA
>> API lifecycle issue, which as well as being even more obviously
>> incorrect usage-wise, could also still lead to the same crash (if
>> the device is non-coherent).
> For a page_pool owned page, it mostly goes through the below steps (a
> rough driver-side sketch follows the list):
>
> 1. page_pool calls the buddy allocator API to allocate a page, and
>    calls the DMA mapping and sync_for_device APIs for it if its pool
>    is empty; otherwise it reuses a page from the pool.
> 2. The driver calls the page_pool API to allocate the page, and
>    passes the page to the network stack after the packet is dma'ed
>    into the page and the sync_for_cpu API is called.
> 3. The network stack is done with the page and calls the page_pool
>    API to free the page.
> 4. page_pool does the dma unmapping and releases the page back to the
>    buddy allocator if the page is not recyclable; otherwise it does
>    the sync_for_device and puts the page in its pool, where it may go
>    through step 1 again if the driver calls the page_pool allocate
>    API.
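>
> As a rough driver-side sketch of those steps, using the mainline
> page_pool API ("dev" stands in for the driver's DMA device, and
> error handling is omitted):
>
>   #include <net/page_pool/helpers.h>
>
>   struct page_pool_params pp_params = {
>           .flags     = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
>           .pool_size = 256,
>           .nid       = NUMA_NO_NODE,
>           .dev       = dev,
>           .dma_dir   = DMA_FROM_DEVICE,
>           .max_len   = PAGE_SIZE,   /* sync_for_device length */
>           .offset    = 0,
>   };
>   struct page_pool *pool = page_pool_create(&pp_params);
>
>   /* steps 1 and 2: mapping/sync_for_device happen inside */
>   struct page *page = page_pool_alloc_pages(pool, GFP_ATOMIC);
>   dma_addr_t dma = page_pool_get_dma_addr(page);
>
>   /* device DMAs the packet in, then the driver syncs for cpu */
>   dma_sync_single_for_cpu(dev, dma, PAGE_SIZE, DMA_FROM_DEVICE);
>
>   /* steps 3 and 4: the stack is done, the page is recycled
>    * (sync_for_device again if it is reused)
>    */
>   page_pool_put_full_page(pool, page, false);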
>
> The calling of the dma mapping and dma sync APIs is controlled by
> pool->dma_map and pool->dma_sync respectively; the previous patch
> only clears pool->dma_map after doing the dma unmapping. This patch
> ensures there is no dma_sync for the recycle case of step 4 by also
> clearing pool->dma_sync.
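>
> For reference, the sync_for_device call in step 4 amounts to roughly
> the following in mainline (lightly trimmed; callers check
> pool->dma_sync before getting here):
>
>   static void page_pool_dma_sync_for_device(struct page_pool *pool,
>                                             struct page *page,
>                                             unsigned int dma_sync_size)
>   {
>           dma_addr_t dma_addr = page_pool_get_dma_addr(page);
>
>           /* sync only the part the device may have written */
>           dma_sync_size = min(dma_sync_size, pool->p.max_len);
>           dma_sync_single_range_for_device(pool->p.dev, dma_addr,
>                                            pool->p.offset,
>                                            dma_sync_size,
>                                            pool->p.dma_dir);
>   }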
But *why* does it want to ensure that? Is there some possible race where
one thread can attempt to sync and recycle a page while another thread
is attempting to unmap and free it, such that you can't guarantee the
correctness of dma_sync calls after page_pool_inflight_unmap() has
started, and skipping them is a workaround for that? If so, then frankly
I think that would want solving properly, but at the very least this
change would need to come before patch #2.
If not, and this is just some attempt at performance micro-optimisation,
then I'd be keen to see the numbers to justify it, since I struggle to
imagine it being worth the bother while already in the process of
spending whole seconds scanning memory...
Thanks,
Robin.
> The dma_sync skipping also happens before page_pool_inflight_unmap()
> is called, because all callers will see the clearing of
> pool->dma_sync after synchronize_rcu(), and page_pool_inflight_unmap()
> is called after that same synchronize_rcu() in page_pool_destroy().
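>
> In page_pool_destroy() terms, the ordering is something like this
> sketch (page_pool_inflight_unmap() is the helper added by patch #2):
>
>   void page_pool_destroy(struct page_pool *pool)
>   {
>           /* ... */
>           pool->dma_sync = false;
>           /* once this returns, every recycle path that could
>            * still have seen dma_sync == true has finished
>            */
>           synchronize_rcu();
>           /* so the inflight pages can be unmapped safely */
>           page_pool_inflight_unmap(pool);
>           /* ... */
>   }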
>> Otherwise, I don't imagine it's really worth worrying about
>> optimising out syncs for any pages which happen to get naturally
>> returned after page_pool_destroy() starts but before they're
>> explicitly reclaimed. Realistically, the kinds of big server systems
>> where reclaim takes an appreciable amount of time are going to be
>> coherent and skipping syncs anyway.
> The skipping is about skipping the dma sync for those inflight pages.
> I should make it clearer in the commit log that the skipping happens
> before the calling of page_pool_inflight_unmap() rather than
> page_pool_destroy().