Yunsheng Lin <linyunsheng@xxxxxxxxxx> writes:

>>> I would prefer the waiting too if simple waiting fixed the test cases
>>> that Yonglong and Haiqing were reporting, and if I did not have to look
>>> into the rabbit hole of possible caching in networking.
>>>
>>> As mentioned in the commit log and [1]:
>>> 1. ipv4 packet defragmentation timeout: this seems to cause a delay of
>>>    up to 30 secs, which was reported by Haiqing.
>>> 2. skb_defer_free_flush(): this may cause an infinite delay if there is
>>>    nothing triggering net_rx_action(), which was reported by Yonglong.
>>>
>>> For case 1, is it really ok to stall the driver unbind for up to 30 secs
>>> under the default setting of the defragmentation timeout?
>>>
>>> For case 2, it is possible to add a timeout for that kind of caching,
>>> like the defragmentation timeout, but as mentioned in [2], this kind of
>>> caching seems to be a normal thing in networking:
>>
>> Both 1 and 2 seem to be cases where the netdev teardown code can just
>> make sure to kick the respective queues and make sure there's nothing
>> outstanding (for (1), walk the defrag cache and clear out anything
>> related to the netdev going away, for (2) make sure to kick
>> net_rx_action() as part of the teardown).
>
> It would be good to be more specific about the 'kick' here: does it mean
> taking the lock and doing one of the below actions for each cache
> instance?
> 1. flush all the cache of each cache instance.
> 2. scan for the page_pool-owned pages and do fine-grained flushing.

Depends on the context. The page pool is attached to a device, so it
should be possible to walk the skb frags queue and just remove any skbs
that refer to that netdevice, or something like that.

As for the lack of net_rx_action(), this is related to the deferred
freeing of skbs, so it seems like just calling skb_defer_free_flush() on
teardown could be an option.

>>> "Eric pointed out/predicted there's no guarantee that applications will
>>> read / close their sockets so a page pool page may be stuck in a socket
>>> (but not leaked) forever."
>>
>> As for this one, I would put that in the "well, let's see if this
>> becomes a problem in practice" bucket.
>
> As per the commit log in [2], it seems it is already happening.
>
> Those caches are mostly per-CPU and per-socket, and there may be
> hundreds of CPUs and thousands of sockets in one system. Are you really
> sure we need to take the lock of each cache instance, of which there may
> be thousands, and do the flushing/scanning of memory used in networking,
> which may be as large as the '24 GiB' mentioned by Jesper?

Well, as above, the two issues you mentioned are per-netns (or possibly
per-CPU), so those seem to be manageable to handle on device teardown if
the wait is really a problem.

But, well, I'm not sure it is? You seem to be taking it as axiomatic
that the wait in itself is bad. Why? It's just a bit of memory being
held on to while it is still in use, and so what?

-Toke
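
P.S. To make the net_rx_action() kick above a bit more concrete, here is a
completely untested sketch of the idea. The helper names are made up for
illustration; since skb_defer_free_flush() is static in net/core/dev.c, the
sketch just raises NET_RX_SOFTIRQ on every CPU so that net_rx_action() runs
there and drains the per-CPU deferred skb free list itself:

#include <linux/interrupt.h>
#include <linux/smp.h>

/* Runs on each CPU (via IPI on remote CPUs, with IRQs disabled locally).
 * Marking NET_RX_SOFTIRQ pending makes net_rx_action() run on that CPU,
 * and net_rx_action() flushes the per-CPU sd->defer_list via
 * skb_defer_free_flush().
 */
static void defer_free_kick(void *unused)
{
	raise_softirq(NET_RX_SOFTIRQ);
}

/* Hypothetical teardown helper: kick every CPU and wait for the callbacks
 * to return before continuing with the netdev/page_pool teardown. The
 * actual flush then happens when NET_RX_SOFTIRQ is processed on each CPU.
 */
static void defer_free_kick_all_cpus(void)
{
	on_each_cpu(defer_free_kick, NULL, 1);
}

Whether this is better than simply waiting is of course the question above,
but it shows the kick does not have to touch each cache instance directly.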