On 3/8/25 10:40 PM, Mina Almasry wrote:
> Currently net_iovs support only pp ref counts, and do not support a
> page ref equivalent.
>
> This is fine for the RX path as net_iovs are used exclusively with the
> pp and only pp refcounting is needed there. The TX path however does not
> use pp ref counts, thus, support for get_page/put_page equivalent is
> needed for netmem.
>
> Support get_netmem/put_netmem. Check the type of the netmem before
> passing it to page or net_iov specific code to obtain a page ref
> equivalent.
>
> For dmabuf net_iovs, we obtain a ref on the underlying binding. This
> ensures the entire binding doesn't disappear until all the net_iovs have
> been put_netmem'ed. We do not need to track the refcount of individual
> dmabuf net_iovs as we don't allocate/free them from a pool similar to
> what the buddy allocator does for pages.
>
> This code is written to be extensible by other net_iov implementers.
> get_netmem/put_netmem will check the type of the netmem and route it to
> the correct helper:
>
>   pages -> [get|put]_page()
>   dmabuf net_iovs -> net_devmem_[get|put]_net_iov()
>   new net_iovs -> new helpers
>
> Signed-off-by: Mina Almasry <almasrymina@xxxxxxxxxx>
> Acked-by: Stanislav Fomichev <sdf@xxxxxxxxxxx>
>
> ---
>
> v5: https://lore.kernel.org/netdev/20250227041209.2031104-2-almasrymina@xxxxxxxxxx/
>
> - Updated to check that the net_iov is devmem before calling
>   net_devmem_put_net_iov().
>
> - Jakub requested that callers of __skb_frag_ref()/skb_page_unref be
>   inspected to make sure that they generate / anticipate skbs with the
>   correct pp_recycle and unreadable setting:
>
> skb_page_unref
> ==============
>
> - callers that are unreachable for unreadable skbs:
>
>   gro_pull_from_frag0, skb_copy_ubufs, __pskb_pull_tail

Why is `__pskb_pull_tail` not reachable? It's called by __pskb_trim(), via skb_condense().

/P