From: Yunsheng Lin <linyunsheng@xxxxxxxxxx>
Date: Tue, 14 Mar 2023 19:37:23 +0800

> On 2023/3/14 5:55, Alexander Lobakin wrote:
>> __xdp_build_skb_from_frame() was the last user of
>> {__,}xdp_release_frame(), which detaches pages from the page_pool.

[...]

>> -/* Only called for MEM_TYPE_PAGE_POOL see xdp.h */
>> -void __xdp_release_frame(void *data, struct xdp_mem_info *mem)
>> -{
>> -	struct xdp_mem_allocator *xa;
>> -	struct page *page;
>> -
>> -	rcu_read_lock();
>> -	xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
>> -	page = virt_to_head_page(data);
>> -	if (xa)
>> -		page_pool_release_page(xa->page_pool, page);
>
> page_pool_release_page() is only called here when xa is not NULL
> and mem->type == MEM_TYPE_PAGE_POOL.
>
> But skb_mark_for_recycle() is called when mem->type == MEM_TYPE_PAGE_POOL
> without checking xa, which does not seem symmetric with patch 3. Is this
> intended?

Intended. page_pool_return_skb_page() checks for %PP_SIGNATURE, and if a
page doesn't belong to any PP, it is returned to the MM layer (a rough
sketch of that check follows at the end of this mail). Moreover, the case
of `mem->type == MEM_TYPE_PAGE_POOL && xa == NULL` is an exception rather
than the rule: it means the page was released from its PP before reaching
the function, and IIRC it's not even possible with our current drivers.
Adding a hashtable lookup to {__,}xdp_build_skb_from_frame() would only
add hotpath overhead with no positive impact.

>
>> -	rcu_read_unlock();
>> -}
>> -EXPORT_SYMBOL_GPL(__xdp_release_frame);
>> -
>>  void xdp_attachment_setup(struct xdp_attachment_info *info,
>> 			   struct netdev_bpf *bpf)
>>  {
>>

Thanks,
Olek
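
For context: the skb recycling path relies on the per-page page_pool
signature rather than on the mem_id rhashtable. Below is a simplified
sketch of the idea, assuming the page_pool back-pointer fields in struct
page (pp, pp_magic) and page_pool_put_full_page(); the helper name is
made up for illustration, upstream this logic lives in
page_pool_return_skb_page():

	#include <linux/mm.h>
	#include <linux/poison.h>
	#include <net/page_pool.h>

	/* Called from the skb freeing path for pp-recycle-marked skbs.
	 * If the page was not allocated by a page_pool (or has already
	 * been released from it), the signature check fails and the
	 * page falls back to the regular MM freeing path.
	 */
	static bool pp_try_recycle(struct page *page)
	{
		page = compound_head(page);

		/* pp_magic carries PP_SIGNATURE, OR'ed in at allocation
		 * time; the low bits are used for other purposes, hence
		 * the mask.
		 */
		if ((page->pp_magic & ~0x3UL) != PP_SIGNATURE)
			return false;	/* let the MM layer free it */

		/* page->pp points back to the owning page_pool, so no
		 * rhashtable lookup is needed on this hotpath.
		 */
		page_pool_put_full_page(page->pp, page, false);

		return true;
	}

So even with skb_mark_for_recycle() set unconditionally for
MEM_TYPE_PAGE_POOL, a page that no longer belongs to a PP is still freed
correctly, just without being recycled.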