Re: [PATCH 6/6] mm/page_alloc: Remotely drain per-cpu lists

On Fri, 2022-05-13 at 16:04 +0100, Mel Gorman wrote:
> On Thu, May 12, 2022 at 12:37:43PM -0700, Andrew Morton wrote:
> > On Thu, 12 May 2022 09:50:43 +0100 Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx> wrote:
> > 
> > > From: Nicolas Saenz Julienne <nsaenzju@xxxxxxxxxx>
> > > 
> > > Some setups, notably NOHZ_FULL CPUs, are too busy to handle the per-cpu
> > > drain work queued by __drain_all_pages(). So introduce a new mechanism to
> > > remotely drain the per-cpu lists. It is made possible by remotely
> > > locking the new per-cpu spinlocks in 'struct per_cpu_pages'. A benefit
> > > of this new scheme is that drain operations are now migration-safe.
> > > 
> > > There was no observed performance degradation vs. the previous scheme.
> > > Both netperf and hackbench were run in parallel with triggering the
> > > __drain_all_pages(NULL, true) code path ~100 times per second. The new
> > > scheme performs a bit better (~5%), although the important point here
> > > is that there are no performance regressions vs. the previous
> > > mechanism. Per-cpu list draining happens only in slow paths.
> > > 
> > > Minchan Kim tested this independently and reported;
> > > 
> > > 	My workload does not use NOHZ CPUs, but it runs apps under heavy
> > > 	memory pressure, so they go into direct reclaim and get stuck in
> > > 	drain_all_pages() until the work runs on the workqueue.
> > > 
> > > 	unit: nanosecond
> > > 	max(dur)        avg(dur)                count(dur)
> > > 	166713013       487511.77786438033      1283
> > > 
> > > 	From traces, the system encountered drain_all_pages() 1283 times;
> > > 	the worst case was 166ms and the average was 487us.
> > > 
> > > 	The other problem was alloc_contig_range() in CMA. The PCP draining
> > > 	sometimes takes several hundred milliseconds even when there is no
> > > 	memory pressure and only a few pages need to be migrated out,
> > > 	because the CPUs were fully booked.
> > > 
> > > 	Your patch completely removed that wasted time.
> > 
> > I'm not getting a sense here of the overall effect upon userspace
> > performance.  As Thomas said last year in
> > https://lkml.kernel.org/r/87v92sgt3n.ffs@tglx
> > 
> > : The changelogs and the cover letter have a distinct void vs. that which
> > : means this is just another example of 'scratch my itch' changes w/o
> > : proper justification.
> > 
> > Is there more to all of this than itchiness and if so, well, you know
> > the rest ;)
> > 
> 
> I think Minchan's example is clear-cut. The draining operation can take
> an arbitrary amount of time waiting for the workqueue to run on each
> CPU, and it can cause severe delays under reclaim or CMA; the patch
> fixes that. Maybe most users won't even notice, but I bet phone users
> will if a camera app takes too long to open.
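> 
> For reference, the existing scheme looks roughly like this (a trimmed
> sketch of the current __drain_all_pages() logic, for illustration only):
> 
> 	/*
> 	 * A drain work item is queued on every CPU that has pages on
> 	 * its per-cpu lists, then we wait for all of them. On a CPU
> 	 * that is busy running something else, flush_work() can block
> 	 * for an arbitrarily long time.
> 	 */
> 	for_each_cpu(cpu, &cpus_with_pcps) {
> 		struct pcpu_drain *drain = per_cpu_ptr(&pcpu_drain, cpu);
> 
> 		drain->zone = zone;
> 		INIT_WORK(&drain->work, drain_local_pages_wq);
> 		queue_work_on(cpu, mm_percpu_wq, &drain->work);
> 	}
> 	for_each_cpu(cpu, &cpus_with_pcps)
> 		flush_work(&per_cpu_ptr(&pcpu_drain, cpu)->work);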
> 
> The first paragraph was written by Nicolas and I did not want to modify
> it heavily while still putting his Signed-off-by on it. Maybe it could
> have been clearer, though, because "too busy" is vague when the actual
> intent is to avoid interfering with RT tasks. Does this sound better to
> you?
> 
> 	Some setups, notably NOHZ_FULL CPUs, may be running realtime or
> 	latency-sensitive applications that cannot tolerate interference
> 	due to per-cpu drain work queued by __drain_all_pages(). Introduce
> 	a new mechanism to remotely drain the per-cpu lists. It is made
> 	possible by remotely locking the new per-cpu spinlocks in 'struct
> 	per_cpu_pages'. This has two advantages: the time to drain is more
> 	predictable, and other unrelated tasks are not interrupted.
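> 
> A minimal sketch of the scheme (illustrative only, names simplified;
> drain_remote_pcp() and free_pcp_pages() are stand-ins, not the real
> functions from the series):
> 
> 	struct per_cpu_pages {
> 		spinlock_t lock;	/* protects the lists below */
> 		int count;		/* pages on the lists */
> 		struct list_head lists[NR_PCP_LISTS];
> 		/* ... */
> 	};
> 
> 	/* Any CPU can drain CPU 'cpu' directly; nothing has to be
> 	 * scheduled on the remote CPU, so an isolated or RT CPU is
> 	 * never interrupted.
> 	 */
> 	static void drain_remote_pcp(struct zone *zone, int cpu)
> 	{
> 		struct per_cpu_pages *pcp;
> 
> 		pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
> 		spin_lock(&pcp->lock);
> 		if (pcp->count)
> 			free_pcp_pages(zone, pcp);	/* stand-in helper */
> 		spin_unlock(&pcp->lock);
> 	}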
> 
> You raise a very valid point with Thomas' mail, and it is a concern that
> the local_lock is no longer strictly local. We still need preemption to
> be disabled between the per-cpu lookup and the lock acquisition, but
> that can be done with get_cpu_var() to make the scope clear.
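> 
> Something along these lines (sketch only; get_cpu_ptr() being the
> variant of get_cpu_var() for a dynamically allocated per-cpu pointer):
> 
> 	/* get_cpu_ptr() disables preemption, so no migration can occur
> 	 * between looking up this CPU's pcp and taking its lock.
> 	 */
> 	pcp = get_cpu_ptr(zone->per_cpu_pageset);
> 	spin_lock(&pcp->lock);
> 	/* ... add/remove pages on pcp->lists ... */
> 	spin_unlock(&pcp->lock);
> 	put_cpu_ptr(zone->per_cpu_pageset);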

This isn't going to work in RT :(

get_cpu_var() disables preemption, and on PREEMPT_RT a spinlock_t is a
sleeping lock that must not be acquired with preemption disabled. There is
more on this in Documentation/locking/locktypes.rst.
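
Concretely, with the sketch above:

	pcp = get_cpu_ptr(zone->per_cpu_pageset);  /* preempt_disable() */
	spin_lock(&pcp->lock);	/* rtmutex-based sleeping lock on
				 * PREEMPT_RT: taking it with preemption
				 * disabled is a bug */

What RT needs is to pin the task to the CPU without disabling
preemption, e.g. something like (again a sketch, not a proposal):

	migrate_disable();	/* stay on this CPU, preemption stays on */
	pcp = this_cpu_ptr(zone->per_cpu_pageset);
	spin_lock(&pcp->lock);	/* safe: sleeping here is allowed */
	/* ... */
	spin_unlock(&pcp->lock);
	migrate_enable();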

Regards,

-- 
Nicolás Sáenz





