On Mon, Dec 9, 2024 at 8:01 PM Jakub Kicinski <kuba@xxxxxxxxxx> wrote:
>
> On Wed, 4 Dec 2024 09:21:50 -0800 David Wei wrote:
> > Then, either the buffer is dropped and returns back to the page pool
> > into the ->freelist via io_pp_zc_release_netmem, in which case the
> > page pool will match hold_cnt for us with ->pages_state_release_cnt.
> > Or more likely the buffer will go through the network/protocol stacks
> > and end up in the corresponding socket's receive queue. From there
> > the user can get it via a new io_uring request implemented in
> > following patches. As mentioned above, before giving a buffer to the
> > user we bump the refcount by IO_ZC_RX_UREF.
> >
> > Once the user is done with the buffer processing, it must return it
> > back via the refill queue, from where our ->alloc_netmems
> > implementation can grab it, check references, put IO_ZC_RX_UREF, and
> > recycle the buffer if there are no more users left. As we place such
> > buffers right back into the page pool's fast cache and they didn't go
> > through the normal pp release path, they are still considered
> > "allocated" and no pp hold_cnt is required. For the same reason we
> > dma sync buffers for the device in io_zc_add_pp_cache().
>
> Can you say more about the IO_ZC_RX_UREF bias? net_iov is not the page
> struct, we can add more fields. In fact we have 8B of padding in it
> that can be allocated without growing the struct. So why play with
> biases? You can add a 32b atomic counter for how many refs have been
> handed out to the user.

Great idea IMO. I would prefer niov->pp_ref_count to remain reserved
for the pp refs used by dereferencing paths shared with pages and
devmem, like napi_pp_put_page. Using an empty field in net_iov would
alleviate that concern.

I think I suggested something similar on v7, although maybe I suggested
putting it in an io_uring-specific struct that hangs off the net_iov to
keep anything memory-type-specific outside of net_iov, but a new field
in net_iov is fine IMO.

--
Thanks,
Mina
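
[Editor's illustrative sketch, not taken from the patch series: one
shape Jakub's suggested dedicated 32-bit user refcount could take. The
'uref' field is assumed to live in net_iov's spare padding, and the
field and helper names below are hypothetical.]

    #include <linux/atomic.h>

    /* Take a user ref before the buffer is handed to userspace,
     * replacing the IO_ZC_RX_UREF bias on pp_ref_count.
     */
    static void io_zcrx_get_uref(struct net_iov *niov)
    {
    	atomic_inc(&niov->uref);
    }

    /* Drop a user ref when the buffer comes back via the refill queue.
     * Returns true once no user refs remain, i.e. the niov can be
     * recycled into the page pool's fast cache without touching
     * pp_ref_count.
     */
    static bool io_zcrx_put_uref(struct net_iov *niov)
    {
    	return atomic_dec_and_test(&niov->uref);
    }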