On 2024/9/20 13:29, Ilias Apalodimas wrote:
> Hi Jesper,
>
> On Fri, 20 Sept 2024 at 00:04, Jesper Dangaard Brouer <hawk@xxxxxxxxxx> wrote:
>>
>>
>> On 19/09/2024 13.15, Yunsheng Lin wrote:
>>> On 2024/9/19 17:42, Jesper Dangaard Brouer wrote:
>>>>
>>>> On 18/09/2024 19.06, Ilias Apalodimas wrote:
>>>>>> In order not to do the dma unmapping after the driver has already
>>>>>> unbound and stall the unloading of the networking driver, add
>>>>>> the pool->items array to record all the pages, including the ones
>>>>>> which are handed over to the network stack, so the page_pool can
>>>>>> do the dma unmapping for those pages when page_pool_destroy()
>>>>>> is called.
>>>>>
>>>>> So, I was thinking of a very similar idea. But what do you mean by
>>>>> "all"? The pages that are still in the caches (slow or fast) of the
>>>>> pool will be unmapped during page_pool_destroy().
>>>>
>>>> I really dislike this idea of having to keep track of all outstanding
>>>> pages.
>>>>
>>>> I liked Jakub's idea of keeping the netdev around for longer.
>>>>
>>>> This is all related to destroying the struct device that points to
>>>> the DMA engine, right?
>>>
>>> Yes, the problem seems to be that when device_del() is called, there is
>>> no guarantee the hw behind the 'struct device' will be usable even if
>>> we call get_device() on it.
>>>
>>>>
>>>> Why don't we add an API that allows netdev to "give" the struct device
>>>> to page_pool? Then the page_pool will take over until we can safely
>>>> free the struct device.
>>>
>>> By 'allow netdev to "give" struct device to page_pool', does it mean
>>> page_pool becomes the driver for the device?
>>> If yes, it seems that is similar to Jakub's idea, as both seem to stall
>>> the calling of device_del() by not returning while the driver is
>>> unloading.
>>
>> Yes, this is what I mean. (That is why I mentioned Jakub's idea).

I am not sure yet what the API that allows netdev to "give" the struct
device to page_pool would look like, or how to implement it, but the
obvious way to stall the calling of device_del() is to wait for the
inflight pages to come back in page_pool_destroy(). From the user's
viewpoint that seems the same as Jakub's way, and Jakub's way seems
more elegant than waiting in page_pool_destroy().
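To make the trade-off concrete, a rough sketch of the "waiting in
page_pool_destroy()" option; this is only a sketch, and pp_inflight()
and pp_destroy_wait() are made-up stand-ins for however the pool
counts and reaps outstanding pages, not actual page_pool code:

#include <linux/delay.h>
#include <net/page_pool/types.h>

/* Hypothetical helper: pages handed out minus pages returned. */
int pp_inflight(struct page_pool *pool);

/* Sketch only: block in page_pool_destroy() until every inflight page
 * has come back, so the struct device stays alive for the dma unmapping.
 */
static void pp_destroy_wait(struct page_pool *pool)
{
	while (pp_inflight(pool) > 0) {
		/* This is the stall: driver unloading blocks here, and
		 * if the network stack caches pages, it may block forever.
		 */
		msleep(100);
	}
	/* All pages are back; unmapping and freeing can proceed safely. */
}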
>
> Keeping track of inflight packets that need to be unmapped is
> certainly more complex. Delaying the netdevice destruction certainly
> solves the problem, but there's a huge cost IMHO. Those devices might
> stay there forever, and we have zero guarantees that the network stack
> will eventually release (and unmap) those packets. What happens in
> that case? The user basically has to reboot the entire machine, just
> because he tried to bring an interface down and up again.

Yes. The question seems to be: how long is page_pool allowed to stall
the driver unloading? And does stalling the driver unload affect
features like device hotplug?

As with the problem in [1], the stall might last forever due to caching
in the network stack, as discussed in [2], and there might be other
caching we don't know about yet.

The stalling log in [1] is caused by the caching in
skb_attempt_defer_free(). We may argue that a timeout is needed for
that kind of caching, but Eric seemed to think otherwise in the commit
log of [3]:
"As Eric pointed out/predicted there's no guarantee that applications
will read / close their sockets so a page pool page may be stuck in a
socket (but not leaked) forever."

1. https://lore.kernel.org/netdev/20240814075603.05f8b0f5@xxxxxxxxxx/T/#me2f2c89fbeb7f92a27d54a85aab5527efedfe260
2. https://lore.kernel.org/netdev/20240814075603.05f8b0f5@xxxxxxxxxx/T/#m2687f25537395401cd6a810ac14e0e0d9addf97e
3. https://lore.kernel.org/netdev/ZWfuyc13oEkp583C@xxxxxxxxxxxxxx/T/

>
> Thanks
> /Ilias
>>
>>
>>> If no, it seems that the problem still exists when the driver for
>>> the device has unbound after device_del() is called.
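P.S. For completeness, a rough sketch of the pool->items tracking idea
from the patch description quoted at the top; all of the names below
(pp_item, pp_items, pp_items_unmap_all) are made up for illustration
and are not the actual patch:

#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/spinlock.h>

struct pp_item {			/* hypothetical, not the real layout */
	struct page	*page;		/* NULL when the slot is free */
	dma_addr_t	dma;
};

struct pp_items {
	struct pp_item	*arr;		/* one slot per page the pool may hand out */
	unsigned int	nr;
	spinlock_t	lock;
};

/* Sketch only: called from page_pool_destroy() while the struct device
 * is still alive, to unmap every tracked page, including the ones still
 * held by the network stack, so driver unloading need not stall.
 */
static void pp_items_unmap_all(struct pp_items *items, struct device *dev)
{
	unsigned int i;

	spin_lock_bh(&items->lock);
	for (i = 0; i < items->nr; i++) {
		struct pp_item *it = &items->arr[i];

		if (!it->page || !it->dma)
			continue;
		dma_unmap_page_attrs(dev, it->dma, PAGE_SIZE,
				     DMA_FROM_DEVICE,
				     DMA_ATTR_SKIP_CPU_SYNC);
		it->dma = 0;	/* page may still be inflight, but is unmapped */
	}
	spin_unlock_bh(&items->lock);
}

The extra complexity you mention would show up in keeping this array
in sync while pages are allocated, recycled and returned concurrently
with a destroy.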