On 2021/5/14 17:17, Ilias Apalodimas wrote:
> On Fri, May 14, 2021 at 04:31:50PM +0800, Yunsheng Lin wrote:
>> On 2021/5/14 15:36, Ilias Apalodimas wrote:
>>> [...]
>>>>> +	return false;
>>>>> +
>>>>> +	pp = (struct page_pool *)page->pp;
>>>>> +
>>>>> +	/* Driver set this to memory recycling info. Reset it on recycle.
>>>>> +	 * This will *not* work for NIC using a split-page memory model.
>>>>> +	 * The page will be returned to the pool here regardless of the
>>>>> +	 * 'flipped' fragment being in use or not.
>>>>> +	 */
>>>>> +	page->pp = NULL;
>>>>
>>>> Why not only clear the page->pp when the page cannot be recycled
>>>> by the page pool? That way we do not need to set and clear it every
>>>> time the page is recycled.
>>>>
>>>
>>> If the page cannot be recycled, page->pp will probably not be set to
>>> begin with. Since we don't embed the feature in page_pool and we
>>> require the driver to explicitly enable it as part of the 'skb flow',
>>> I'd rather keep it as is. When we set/clear page->pp, the page is
>>> probably already in cache, so I doubt this will have any measurable
>>> impact.
>>
>> The point is that we already have skb->pp_recycle to let the driver
>> explicitly enable recycling as part of the 'skb flow'. If the page
>> pool keeps page->pp set while it owns the page, then the driver only
>> needs one skb_mark_for_recycle() call per skb, instead of calling
>> skb_mark_for_recycle() for each page frag of the skb.
>>
>
> The driver is meant to call skb_mark_for_recycle() for the skb and
> page_pool_store_mem_info() for the fragments (in order to store
> page->pp). Nothing bad will happen if you call skb_mark_for_recycle()
> on a frag though, but in any case you need to store the page_pool
> pointer of each frag in struct page.

Right. And nothing bad will happen if we keep the page_pool pointer in
page->pp for as long as the page pool owns the page either, even when
skb->pp_recycle is not set, right?

>
>> Maybe we can add a parameter in "struct page_pool_params" to let the
>> driver decide whether the page pool ptr is stored in page->pp while
>> the page pool owns the page?
>
> Then you'd have to check the page pool config before saving the
> meta-data,

I am not sure what "saving the meta-data" means here?

> and you would have to make the skb path aware of that as well (I
> assume you mean replace pp_recycle with this?).

I meant we could set page->pp when the page is allocated from
alloc_pages() in __page_pool_alloc_pages_slow(), either unconditionally
or according to a newly added field in pool->p, and only clear it in
page_pool_release_page(). Between those two points the page is owned by
the page pool, right?

> If not and you just want to add an extra flag on page_pool_params and
> be able to enable recycling depending on that flag, we just add a
> patch afterwards. I am not sure we need an extra if for each packet
> though.

In that case, skb_mark_for_recycle() would only need to set
skb->pp_recycle, without touching the flag in pool->p.
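Something like the below, completely untested and only meant to show
the idea; PP_FLAG_SKB_RECYCLE is a made-up flag name, and the DMA
mapping and accounting details are elided:

	static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
							 gfp_t gfp)
	{
		struct page *page;

		page = alloc_pages_node(pool->p.nid, gfp, pool->p.order);
		if (!page)
			return NULL;

		/* dma mapping and stats elided */

		/* Store the recycling info once, while the pool owns
		 * the page, instead of on every mark/recycle cycle.
		 */
		if (pool->p.flags & PP_FLAG_SKB_RECYCLE)
			page->pp = pool;

		return page;
	}

	void page_pool_release_page(struct page_pool *pool, struct page *page)
	{
		/* The pool gives up ownership of the page here, so this
		 * is the only place the pointer needs clearing.
		 */
		page->pp = NULL;

		/* dma unmap and accounting elided */
	}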
>
>> Another thing that occurred to me is that if the driver uses pages
>> from the page pool to build a skb and does not call
>> skb_mark_for_recycle(), then there will be a resource leak, right?
>> If yes, it seems the skb_mark_for_recycle() call does not add any
>> value?
>>
>
> Not really, the driver has 2 choices:
> - call page_pool_release_page() once it receives the payload. That
>   will clean up dma mappings (if page pool is responsible for them)
>   and free the buffer

This is only needed before SKB recycling is supported, or when the
driver explicitly does not want the SKB recycling support, right?

> - call skb_mark_for_recycle(). Which will end up recycling the buffer.

If the driver needs an extra flag to enable recycling per skb instead
of per page pool, then adding skb_mark_for_recycle() makes sense to me
too; otherwise it seems that adding a field in pool->p to enable
recycling per page pool makes more sense?

>
> If you call none of those, you'd leak a page, but that's a driver bug.
> Patches [4/5, 5/5] do that for two marvell drivers.
> I really want to make drivers opt in to the feature instead of always
> enabling it.
>
> Thanks
> /Ilias
>>
>>>
>>>>> +	page_pool_put_full_page(pp, virt_to_head_page(data), false);
>>>>> +
>>>>> 	C(end);
>>>
>>> [...]
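FWIW, the two choices above as I understand them from the driver side,
in untested pseudo-driver code (rxq->page_pool is a made-up field, and
skb_mark_for_recycle() is assumed to take the skb, page and pool as in
this series):

	/* Choice 1: unmap the page and drop pool ownership; the skb
	 * then frees the page through the normal put_page() path.
	 */
	page_pool_release_page(rxq->page_pool, page);
	skb = build_skb(page_address(page), PAGE_SIZE);

	/* Choice 2: keep the pool's DMA mapping and let consume_skb()/
	 * kfree_skb() return the page to the pool for reuse.
	 */
	skb = build_skb(page_address(page), PAGE_SIZE);
	skb_mark_for_recycle(skb, page, rxq->page_pool);

If that matches your intention, then my question above is really
whether choice 2 could be enabled per page pool instead of per skb.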