Re: rcu pending

On Mon, Aug 19, 2024 at 07:37:55PM -0400, Kent Overstreet wrote:
> On Mon, Aug 19, 2024 at 04:10:23PM GMT, Paul E. McKenney wrote:
> > On Mon, Aug 19, 2024 at 01:09:09PM -0400, Kent Overstreet wrote:
> > > On Mon, Aug 19, 2024 at 08:49:26AM GMT, Paul E. McKenney wrote:
> > > > On Mon, Aug 19, 2024 at 11:14:02AM -0400, Kent Overstreet wrote:
> > > > By "number of outstanding grace period sequence numbers" you mean
> > > > the number of outstanding grace period sequence numbers that have
> > > > memory blocks that have not yet been processed?  If so, how could that
> > > > possibly matter more than the total number of memory blocks that have
> > > > not been processed, regardless of which grace-period number they are
> > > > associated with?  Why would a huge number of memory blocks fail to
> > > > cause an OOM simply because they happen to be associated with a single
> > > > grace-period number?  Or, to put it another way, suppose that same number
> > > > of memory blocks were distributed over a large number of grace-period
> > > > sequence numbers?  How could this possibly cause an OOM to be more likely
> > > > than if the same number of memory blocks were associated with a single
> > > > grace-period number?
> > > 
> > > We just want a callback every time one of those grace periods expires, so
> > > the pending objects can be freed as soon as they're ready.
> > > 
> > > This isn't _fatal_ for the kvfree_rcu() backend, since we have memory
> > > reclaim to fall back on, but it would still definitely be preferable for
> > > rcu_pending to be getting the notification from core RCU and avoid the
> > > more expensive memory reclaim path.
> > > 
> > > It is more critical if we want to use this for a faster call_rcu()
> > > backend.
> > 
> > I am still missing why the call to process_finished_items() from
> > __rcu_pending_enqueue() cannot cover this.  Given the way that RCU works,
> > that has the potential to notify you *before* RCU's grace-period kthread
> > would get around to doing so.
> 
> Not with any kind of bound, since there's no predicting whether that
> enqueue() call will happen, and not all potential users can process
> pending objects at enqueue() time.

And if that enqueue() never does happen, then the RCU callback is
there for you.  But in that case, the flood of callbacks must have
stopped.
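
To make that concrete, here is a rough sketch of an enqueue-time check
built on the existing polled grace-period API (get_state_synchronize_rcu()
and poll_state_synchronize_rcu()).  The structures and names below are
made up for illustration; they are not taken from the actual rcu_pending
code:

#include <linux/list.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Illustrative only: one bucket of objects waiting on one GP cookie. */
struct pending_bucket {
	unsigned long		gp_seq;	/* cookie from get_state_synchronize_rcu() */
	struct list_head	objs;	/* objects to free once gp_seq has elapsed */
};

struct pending_obj {
	struct list_head	list;
};

/* Caller is assumed to have INIT_LIST_HEAD()ed each bucket's list. */
static void sketch_enqueue(struct pending_bucket *buckets, unsigned int nr,
			   struct pending_obj *obj)
{
	unsigned long seq = get_state_synchronize_rcu();
	unsigned int i;

	/*
	 * Opportunistically free everything whose grace period has already
	 * elapsed, so a steady stream of enqueues also drains finished
	 * buckets without waiting for an RCU callback.
	 */
	for (i = 0; i < nr; i++) {
		struct pending_obj *p, *n;

		if (list_empty(&buckets[i].objs) ||
		    !poll_state_synchronize_rcu(buckets[i].gp_seq))
			continue;

		list_for_each_entry_safe(p, n, &buckets[i].objs, list) {
			list_del(&p->list);
			kfree(p);
		}
	}

	/* Add the new object under the current grace-period cookie. */
	for (i = 0; i < nr; i++)
		if (list_empty(&buckets[i].objs) || buckets[i].gp_seq == seq)
			break;
	if (i == nr)
		i = 0;	/* safe but suboptimal: older objects just wait for the newer cookie */

	buckets[i].gp_seq = seq;
	list_add(&obj->list, &buckets[i].objs);
}

The point of doing the check at enqueue time is just that it is cheap;
whether it runs often enough without a flood of enqueues is exactly the
question above.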

> Like I said, this isn't the most critical issue, I'm mainly thinking
> about if we want to use this to get rid of linked list overhead for
> call_rcu() processing itself.

In the common case for kfree_rcu(), the linked-list overhead is already
gone due to the pages of pointers.  You get up to 500+ pointers nicely
adjacent in one page, so that the linked-list overhead going from one
page to the next is way down in the noise.
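
Roughly this shape, simplified and with made-up names rather than the
actual kvfree_rcu() structures; each block is a single page, so on a 4K
page the header leaves room for roughly 500 pointers:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/slab.h>

/* Simplified illustration of the "page of pointers" idea. */
struct ptr_page {
	struct ptr_page	*next;		/* one link per ~500 objects, not per object */
	unsigned long	nr;		/* how many slots are filled */
	void		*ptrs[];	/* fills out the rest of the page */
};

#define PTRS_PER_PTR_PAGE \
	((PAGE_SIZE - sizeof(struct ptr_page)) / sizeof(void *))

/* After the grace period: free every pointer, then the page itself. */
static void free_page_of_ptrs(struct ptr_page *pg)
{
	unsigned long i;

	for (i = 0; i < pg->nr; i++)
		kvfree(pg->ptrs[i]);

	free_page((unsigned long)pg);
}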

							Thanx, Paul



