Re: [patch 2/2] mm: page_alloc: drain pages remotely

On Tue, Jun 16, 2020 at 06:32:48PM +0200, Sebastian Andrzej Siewior wrote:
> On 2020-06-16 13:11:51 [-0300], Marcelo Tosatti wrote:
> > Remote draining of pages was removed from 5.6-rt.
> > 
> > Unfortunately it is necessary for use cases that run a busy-spinning
> > SCHED_FIFO thread on an isolated CPU: drain_all_pages() queues drain
> > work on that CPU, but the busy-spinning task never yields, so the
> > kworker never runs and flush_work() blocks indefinitely:
> > 
> > [ 7475.821066] INFO: task ld:274531 blocked for more than 600 seconds.
> > [ 7475.822157]       Not tainted 4.18.0-208.rt5.20.el8.x86_64 #1
> > echo 0 > /proc/sys/kernel/hung_task_timeout_secs disables this message.
> > [ 7475.824392] ld              D    0 274531 274530 0x00084080
> > [ 7475.825307] Call Trace:
> > [ 7475.825761]  __schedule+0x342/0x850
> > [ 7475.826377]  schedule+0x39/0xd0
> > [ 7475.826923]  schedule_timeout+0x20e/0x410
> > [ 7475.827610]  ? __schedule+0x34a/0x850
> > [ 7475.828247]  ? ___preempt_schedule+0x16/0x18
> > [ 7475.828953]  wait_for_completion+0x85/0xe0
> > [ 7475.829653]  flush_work+0x11a/0x1c0
> > [ 7475.830313]  ? flush_workqueue_prep_pwqs+0x130/0x130
> > [ 7475.831148]  drain_all_pages+0x140/0x190
> > [ 7475.831803]  __alloc_pages_slowpath+0x3f8/0xe20
> > [ 7475.832571]  ? mem_cgroup_commit_charge+0xcb/0x510
> > [ 7475.833371]  __alloc_pages_nodemask+0x1ca/0x2b0
> > [ 7475.834134]  pagecache_get_page+0xb5/0x2d0
> > [ 7475.834814]  ? account_page_dirtied+0x11a/0x220
> > [ 7475.835579]  grab_cache_page_write_begin+0x1f/0x40
> > [ 7475.836379]  iomap_write_begin.constprop.44+0x1c1/0x370
> > [ 7475.837241]  ? iomap_write_end+0x91/0x290
> > [ 7475.837911]  iomap_write_actor+0x92/0x170
> > ...
> > 
> > So enable remote draining again.
> 
> Is upstream affected by this? And if not, why not?
> 
> > Index: linux-rt-devel/mm/page_alloc.c
> > ===================================================================
> > --- linux-rt-devel.orig/mm/page_alloc.c
> > +++ linux-rt-devel/mm/page_alloc.c
> > @@ -360,6 +360,16 @@ EXPORT_SYMBOL(nr_online_nodes);
> >  
> >  static DEFINE_LOCAL_IRQ_LOCK(pa_lock);
> >  
> > +#ifdef CONFIG_PREEMPT_RT
> > +# define cpu_lock_irqsave(cpu, flags)          \
> > +	local_lock_irqsave_on(pa_lock, flags, cpu)
> > +# define cpu_unlock_irqrestore(cpu, flags)     \
> > +	local_unlock_irqrestore_on(pa_lock, flags, cpu)
> > +#else
> > +# define cpu_lock_irqsave(cpu, flags)		local_irq_save(flags)
> > +# define cpu_unlock_irqrestore(cpu, flags)	local_irq_restore(flags)
> > +#endif
> 
> This is going to be tough. I removed the cross-CPU local-locks from RT
> because they do something different for !RT. Furthermore we have
> local_locks in upstream as of v5.8-rc1, see commit
>    91710728d1725 ("locking: Introduce local_lock()")
> 
> so whatever happens here should have upstream blessing or I will be
> forced to drop the patch again while moving forward.

Understood. 
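
For reference, the unquoted half of the patch uses those wrappers roughly
as follows (a sketch from memory, not the exact hunk;
drain_pages_remote() is a name invented here for illustration, while
drain_pages() is the existing mm/page_alloc.c helper that empties a
CPU's per-CPU page lists):

	/*
	 * Sketch only. On PREEMPT_RT, cpu_lock_irqsave() acquires the
	 * remote CPU's pa_lock, so another CPU can empty @cpu's per-CPU
	 * page lists directly; on !RT it degrades to local_irq_save()
	 * and therefore still has to run on @cpu itself (via queued work).
	 */
	static void drain_pages_remote(unsigned int cpu)
	{
		unsigned long flags;

		cpu_lock_irqsave(cpu, flags);
		drain_pages(cpu);	/* existing per-CPU list drain */
		cpu_unlock_irqrestore(cpu, flags);
	}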

> Before this, I looked for cases where remote drain is useful / needed
> and didn't find one. 

Just pointed out one.

> I talked to Frederick and for the NO_HZ_FULL people
> it is not a problem because they don't enter the kernel and so they
> never get anything on their per-CPU lists.

People are using NOHZ_FULL CPUs to run both SCHED_FIFO realtime
workloads and normal workloads. Moreover, even a syscall-less
application goes through a setup phase:

1) Set up the application (malloc buffers, etc).
2) Set SCHED_FIFO priority.
3) sched_setaffinity() to the NOHZ_FULL CPU.

The per-CPU page lists populated during setup will be large and must be
shrunk, i.e. drained (see the sketch below).
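
A minimal userspace sketch of that sequence (buffer size, CPU number and
the placement of free() are illustrative only):

	#define _GNU_SOURCE
	#include <sched.h>
	#include <stdlib.h>
	#include <string.h>

	int main(void)
	{
		/* 1) Setup: allocate and touch working buffers. */
		size_t sz = 64UL << 20;		/* 64MB, illustrative */
		char *buf = malloc(sz);
		memset(buf, 0, sz);

		/* 2) Switch to SCHED_FIFO. */
		struct sched_param sp = { .sched_priority = 1 };
		sched_setscheduler(0, SCHED_FIFO, &sp);

		/* 3) Pin to the NOHZ_FULL CPU (CPU 3 here). */
		cpu_set_t set;
		CPU_ZERO(&set);
		CPU_SET(3, &set);
		sched_setaffinity(0, sizeof(set), &set);

		/*
		 * Pages freed from now on land on this CPU's per-CPU
		 * lists; the loop below never yields, so only a remote
		 * drain can hand them back to the buddy allocator.
		 */
		free(buf);
		for (;;)
			;
	}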

> We had this
>   https://lore.kernel.org/linux-mm/20190424111208.24459-1-bigeasy@xxxxxxxxxxxxx/

I will reply to that thread. Do you want to refresh/resend that patchset,
or should I?
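
For reference, the upstream API from 91710728d1725 looks roughly like
this (a sketch based on Documentation/locking/locktypes.rst; the struct
and function names are made up, and note there is no cross-CPU *_on()
variant, which is the functionality the wrappers above would need):

	#include <linux/local_lock.h>
	#include <linux/percpu.h>

	struct pcp_state {
		local_lock_t lock;
		int count;		/* stand-in for per-CPU data */
	};

	static DEFINE_PER_CPU(struct pcp_state, pcp_state) = {
		.lock = INIT_LOCAL_LOCK(lock),
	};

	static void pcp_state_inc(void)
	{
		unsigned long flags;

		/* Always operates on the *current* CPU's lock instance. */
		local_lock_irqsave(&pcp_state.lock, flags);
		this_cpu_inc(pcp_state.count);
		local_unlock_irqrestore(&pcp_state.lock, flags);
	}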



