On Sun, Apr 3, 2022 at 8:50 PM Vaibhav Jain <vaibhav@xxxxxxxxxxxxx> wrote:
>
> Apologies for the delayed response,
>
> Yosry Ahmed <yosryahmed@xxxxxxxxxx> writes:
>
> > On Fri, Apr 1, 2022 at 1:39 AM Vaibhav Jain <vaibhav@xxxxxxxxxxxxx> wrote:
> >>
> >> Yosry Ahmed <yosryahmed@xxxxxxxxxx> writes:
> >> > From: Shakeel Butt <shakeelb@xxxxxxxxxx>
> >> >
> >> > Introduce a memcg interface to trigger memory reclaim on a memory cgroup.
> >> <snip>
> >>
> >> > +
> >> > +	while (nr_reclaimed < nr_to_reclaim) {
> >> > +		unsigned long reclaimed;
> >> > +
> >> > +		if (signal_pending(current))
> >> > +			break;
> >> > +
> >> > +		reclaimed = try_to_free_mem_cgroup_pages(memcg,
> >> > +					nr_to_reclaim - nr_reclaimed,
> >> > +					GFP_KERNEL, true);
> >> > +
> >> > +		if (!reclaimed && !nr_retries--)
> >> > +			break;
> >> > +
> >> > +		nr_reclaimed += reclaimed;
> >>
> >> I think there should be a cond_resched() in this loop before
> >> try_to_free_mem_cgroup_pages() to have better chances of reclaim
> >> succeeding early.
> >>
> >
> > Thanks for taking the time to look at this!
> >
> > I believe this loop is modeled after the loop in memory_high_write()
> > for the memory.high interface. Is there a reason why it should be
> > needed here but not there?
> >
>
> memory_high_write() calls drain_all_stock() at least once before calling
> try_to_free_mem_cgroup_pages(). This drains all percpu stocks for the
> given memcg and its descendants, giving try_to_free_mem_cgroup_pages()
> a better chance of succeeding quickly. Such functionality is missing
> from this patch.
>
> Adding a cond_resched() would at least give other processes within the
> memcg a chance to run and make forward progress, thereby making more
> pages available for reclaim.
>
> The suggestion is partly based on the fact that __perform_reclaim()
> issues a cond_resched(), as it may get called repeatedly during the
> direct reclaim path.
>

As Michal pointed out, there is already a call to cond_resched() in
shrink_node_memcgs(). (A paraphrased sketch of the memory_high_write()
reclaim loop discussed above is included at the end of this message for
reference.)

> >> <snip>
> >>
> >> --
> >> Cheers
> >> ~ Vaibhav
> >
>
> --
> Cheers
> ~ Vaibhav
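
For reference, this is roughly the reclaim loop in memory_high_write()
(mm/memcontrol.c, circa v5.17) that Vaibhav is describing above. It is a
paraphrased sketch for illustration only, not a verbatim or standalone
compilable excerpt; memcg, high, and nr_retries come from the surrounding
function:

	/* Flush percpu charge caches once before resorting to reclaim. */
	bool drained = false;

	for (;;) {
		unsigned long nr_pages = page_counter_read(&memcg->memory);
		unsigned long reclaimed;

		/* Stop once usage is at or below the new high limit. */
		if (nr_pages <= high)
			break;

		if (signal_pending(current))
			break;

		if (!drained) {
			/* Drain per-CPU stocks for this memcg and descendants. */
			drain_all_stock(memcg);
			drained = true;
			continue;
		}

		reclaimed = try_to_free_mem_cgroup_pages(memcg, nr_pages - high,
							 GFP_KERNEL, true);

		if (!reclaimed && !nr_retries--)
			break;
	}

The proposed memory.reclaim loop quoted above could adopt the same pattern
by draining the percpu stocks once before its first call to
try_to_free_mem_cgroup_pages().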