> On Thu, Apr 06, 2023 at 06:37:53AM +0200, Uladzislau Rezki wrote:
> > On Thu, Apr 06, 2023 at 08:12:38AM +0800, Zqiang wrote:
> > > Currently, in kfree_rcu_shrink_scan(), drain_page_cache() is
> > > executed before kfree_rcu_monitor() to drain the page cache. If the
> > > grace period recorded in the bnode structure's ->gp_snap has already
> > > elapsed, kvfree_rcu_bulk() will refill the page cache in
> > > kfree_rcu_monitor(). This commit adds a check of the krcp
> > > structure's ->backoff_page_cache_fill in put_cached_bnode(): if
> > > ->backoff_page_cache_fill is set, prevent the page cache from
> > > growing.
> > >
> > > Signed-off-by: Zqiang <qiang1.zhang@xxxxxxxxx>
> > > ---
> > >  kernel/rcu/tree.c | 2 ++
> > >  1 file changed, 2 insertions(+)
> > >
> > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > > index 9cc0a7766fd2..f25430ae1936 100644
> > > --- a/kernel/rcu/tree.c
> > > +++ b/kernel/rcu/tree.c
> > > @@ -2907,6 +2907,8 @@ static inline bool
> > >  put_cached_bnode(struct kfree_rcu_cpu *krcp,
> > >  	struct kvfree_rcu_bulk_data *bnode)
> > >  {
> > > +	if (atomic_read(&krcp->backoff_page_cache_fill))
> > > +		return false;
> > >  	// Check the limit.
> > >  	if (krcp->nr_bkv_objs >= rcu_min_cached_objs)
> > >  		return false;
> > > --
> > > 2.32.0
> > >
> > Reviewed-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
>
> Thank you both!
>
> One question, though. Might it be better to instead modify the "for"
> loop in fill_page_cache_func() to start at krcp->nr_bkv_objs instead
> of starting at zero? That way, we still provide a single page under
> low-memory conditions, but provide rcu_min_cached_objs of them if
> memory is plentiful.
>
> Alternatively, if we really don't want to allow any pages at all under
> low-memory conditions, shouldn't fill_page_cache_func() set nr_pages
> to zero (instead of the current 1) when the
> krcp->backoff_page_cache_fill flag is set?

Hi, Paul

If krcp->backoff_page_cache_fill is true, put_cached_bnode() returns
false, so the single page allocated in fill_page_cache_func() will just
be freed again there. It would be better not to allocate it at all
under memory pressure. How about the following?

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 9cc0a7766fd2..94aedbc3da36 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2907,6 +2907,8 @@ static inline bool
 put_cached_bnode(struct kfree_rcu_cpu *krcp,
 	struct kvfree_rcu_bulk_data *bnode)
 {
+	if (atomic_read(&krcp->backoff_page_cache_fill))
+		return false;
 	// Check the limit.
 	if (krcp->nr_bkv_objs >= rcu_min_cached_objs)
 		return false;
@@ -3220,7 +3222,7 @@ static void fill_page_cache_func(struct work_struct *work)
 	int i;

 	nr_pages = atomic_read(&krcp->backoff_page_cache_fill) ?
-		1 : rcu_min_cached_objs;
+		0 : rcu_min_cached_objs;

 	for (i = 0; i < nr_pages; i++) {
 		bnode = (struct kvfree_rcu_bulk_data *)

Thanks
Zqiang

> This would likely mean also breaking out of that loop if
> krcp->backoff_page_cache_fill was set in the meantime (which happens
> implicitly with Zqiang's patch).
>
> Or am I missing something subtle here?
>
> 							Thanx, Paul
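
For reference, a minimal sketch of the first alternative Paul suggests
above: leave nr_pages alone, but start the fill loop at the cache's
current occupancy so it is only topped up to rcu_min_cached_objs rather
than refilled from zero. This is an untested illustration against the
fill_page_cache_func() shown in the diffs above, not a patch from the
thread; the lockless READ_ONCE() access to krcp->nr_bkv_objs is an
assumption, and the mid-loop bail-out Paul mentions is not shown.

@@ ... @@ static void fill_page_cache_func(struct work_struct *work)
 	nr_pages = atomic_read(&krcp->backoff_page_cache_fill) ?
 		1 : rcu_min_cached_objs;

-	for (i = 0; i < nr_pages; i++) {
+	/* Top up from the current occupancy instead of refilling from zero. */
+	for (i = READ_ONCE(krcp->nr_bkv_objs); i < nr_pages; i++) {
 		bnode = (struct kvfree_rcu_bulk_data *)

With this shape, a single page can still be provided when
backoff_page_cache_fill is set, while under plentiful memory the cache
grows only by the difference between rcu_min_cached_objs and the pages
already cached.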