On 2023/11/8 5:59, Mina Almasry wrote:
> On Mon, Nov 6, 2023 at 11:46 PM Yunsheng Lin <linyunsheng@xxxxxxxxxx> wrote:
>>
>> On 2023/11/6 10:44, Mina Almasry wrote:
>>> +
>>> +void __netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding)
>>> +{
>>> +	size_t size, avail;
>>> +
>>> +	gen_pool_for_each_chunk(binding->chunk_pool,
>>> +				netdev_devmem_free_chunk_owner, NULL);
>>> +
>>> +	size = gen_pool_size(binding->chunk_pool);
>>> +	avail = gen_pool_avail(binding->chunk_pool);
>>> +
>>> +	if (!WARN(size != avail, "can't destroy genpool. size=%lu, avail=%lu",
>>> +		  size, avail))
>>> +		gen_pool_destroy(binding->chunk_pool);
>>
>> Is there any other place calling gen_pool_destroy() when the above
>> warning is triggered? Do we have a leak of binding->chunk_pool?
>>
>
> gen_pool_destroy() BUG_ON()s if the pool is not empty at the time of
> destruction. Technically that should never happen, because
> __netdev_devmem_binding_free() should only be called when the refcount
> hits 0, so all the chunks have been freed back to the gen_pool. But,
> just in case, I don't want to crash the server just because I'm
> leaking a chunk... this is a bit of defensive programming that is
> typically frowned upon, but the behavior of gen_pool is so severe that
> I think the WARN() + check is warranted here.

It seems pretty normal for the above to happen nowadays because of
retransmit timeouts and the NAPI defer schemes mentioned below:

https://lkml.kernel.org/netdev/168269854650.2191653.8465259808498269815.stgit@firesoul/

And currently the page pool core handles that by using a workqueue.
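
A rough sketch of that kind of deferral, in the spirit of how
page_pool_release_retry() keeps retrying until all in-flight pages have
come back, might look like the below. The names here (release_dw,
netdev_devmem_binding_release_retry(), BINDING_RELEASE_RETRY_MSEC) are
made up for illustration, not part of the posted patch:

#include <linux/genalloc.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

#define BINDING_RELEASE_RETRY_MSEC	1000

struct netdev_dmabuf_binding {
	struct gen_pool *chunk_pool;
	struct delayed_work release_dw;
	/* ... other members elided ... */
};

static void netdev_devmem_binding_release_retry(struct work_struct *wq)
{
	struct delayed_work *dwq = to_delayed_work(wq);
	struct netdev_dmabuf_binding *binding =
		container_of(dwq, struct netdev_dmabuf_binding, release_dw);

	/* Chunks may still be outstanding (e.g. held across retransmit
	 * timeouts or deferred NAPI processing); retry later instead of
	 * letting gen_pool_destroy() BUG_ON().
	 */
	if (gen_pool_avail(binding->chunk_pool) !=
	    gen_pool_size(binding->chunk_pool)) {
		schedule_delayed_work(&binding->release_dw,
				      msecs_to_jiffies(BINDING_RELEASE_RETRY_MSEC));
		return;
	}

	gen_pool_destroy(binding->chunk_pool);
	kfree(binding);
}

static void
netdev_devmem_binding_schedule_release(struct netdev_dmabuf_binding *binding)
{
	/* Called once the binding refcount hits 0; the work rechecks the
	 * pool until every chunk has been returned, then tears it down.
	 */
	INIT_DELAYED_WORK(&binding->release_dw,
			  netdev_devmem_binding_release_retry);
	schedule_delayed_work(&binding->release_dw, 0);
}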