Re: [PATCH net-next -v5 3/4] mm: introduce __get_page() and __put_page()

On Tue, Oct 12, 2021 at 03:38:15PM +0800, Yunsheng Lin wrote:
> On 2021/10/11 20:29, Ilias Apalodimas wrote:
> > On Mon, Oct 11, 2021 at 02:25:08PM +0200, Jesper Dangaard Brouer wrote:
> >>
> >>
> >> On 09/10/2021 21.49, John Hubbard wrote:
> >>> So in case it's not clear, I'd like to request that you drop this one
> >>> patch from your series.
> >>
> >> In my opinion as page_pool maintainer, you should also drop patch 4/4 from
> >> this series.
> >>
> >> I like the first two patches, and they should be resend and can be applied
> >> without too much further discussion.
> > 
> > +1
> 
> Ok, it seems there is a lot of contention about how to avoid calling
> compound_head() now.
> 

IMHO compound_head() is not that heavy.  So you could keep the get/put page
calls as-is and worry about micro-optimizations later,  especially since
it's intersecting with the folio changes atm.

> Will send out the uncontroversial one first.
> 

Thanks!

> > That's what I hinted on the previous version. The patches right now go way
> > beyond the spec of page pool.  We are starting to change core networking
> > functions and imho we need a lot more people involved in this discussion,
> > than the ones participating already.
> > 
> > As a general note and the reason I am so hesitant,  is that we are starting
> > to violate layers here (at least in my opinion).  When the recycling was
> > added,  my main concern was to keep the network stack unaware (apart from
> > the skb bit).  Now suddenly we need to teach frag_ref/unref internal page
> 
> Maybe the skb recycle bit is a clever way to avoid dealing with the network
> stack directly.
> 
> But that bit might also introduce or hide some problems, like the data race
> pointed out by Alexander, and the odd use of page_pool in the mlx5 driver.

Yea.  I was always wondering if unmapping the buffers and letting the network
stack deal with them eventually would be a good idea (in those special cases).
There's an obvious disadvantage (which imho is terrible) in this approach.
Any future functions that we add in the core networking code will need to
keep that in mind,  and unmap some random driver memory if they start
playing tricks with the skb and their fragments.  IOW I think this is very
fragile.

> 
> > pool counters and that doesn't feel right.  We first need to prove the race
> > can actually happen, before starting to change things.
> 
> As the network stack is adding a lot of performance improvements, such as
> sockmap for BPF, which may cause problems here, I will dig more to prove
> that.
> 

Ok that's something we need to look at.  Are those buffers freed eventually
by skb_free_head() etc?

Regards
/Ilias

