Hi Matthew,

[...]

> And the contents of this page already came from that device ... if it
> wanted to write bad data, it could already have done so.
>
> > > > (3) The page_pool is optimized for refcnt==1 case, and AFAIK TCP-RX
> > > > zerocopy will bump the refcnt, which means the page_pool will not
> > > > recycle the page when it see the elevated refcnt (it will instead
> > > > release its DMA-mapping).
> > >
> > > Yes this is right but the userspace might have already consumed and
> > > unmapped the page before the driver considers to recycle the page.
> >
> > That is a good point. So, there is a race window where it is possible
> > to gain recycling.
> >
> > It seems my page_pool co-maintainer Ilias is interested in taking up the
> > challenge to get this working with TCP RX zerocopy. So, lets see how
> > this is doable.
>
> You could also check page_ref_count() - page_mapcount() instead of
> just checking page_ref_count(). Assuming mapping/unmapping can't
> race with recycling?
>

That's not a bad idea. As I explained in my last reply to Shakeel,
I don't think the current patch will blow up anywhere. If the page is
unmapped prior to kfree_skb(), it will be recycled. If it's done in the
reverse order, we'll just free the page entirely and will have to
re-allocate it. The only thing I need to test is potential races
(assuming those can even happen?).

Trying to recycle the page outside of kfree_skb() means we'd have to
'steal' the page during put_page() (or some function outside the
networking scope). I think this is going to have a measurable
performance penalty, though: not in networking specifically, but in
general.

In any case, that should be orthogonal to the current patchset. So
unless someone feels strongly about it, I'd prefer keeping the current
code and trying to enable recycling in the skb zc case, once we have
enough users of the API.

Thanks
/Ilias