Re: [PATCH 5/6] sched_ext: idle: Per-node idle cpumasks

On Tue, Feb 11, 2025 at 03:34:11PM +0100, Andrea Righi wrote:
> On Tue, Feb 11, 2025 at 09:19:52AM -0500, Yury Norov wrote:
> > On Tue, Feb 11, 2025 at 10:50:46AM +0100, Andrea Righi wrote:
> > > On Tue, Feb 11, 2025 at 08:41:45AM +0100, Andrea Righi wrote:
> > > > On Tue, Feb 11, 2025 at 08:32:51AM +0100, Andrea Righi wrote:
> > > > > On Mon, Feb 10, 2025 at 11:57:42AM -0500, Yury Norov wrote:
> > > > > ...
> > > > > > > > +/*
> > > > > > > > + * Find the best idle CPU in the system, relative to @node.
> > > > > > > > + */
> > > > > > > > +s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags)
> > > > > > > > +{
> > > > > > > > +	nodemask_t unvisited = NODE_MASK_ALL;
> > > > > > 
> > > > > > This should be a NODEMASK_ALLOC(). We don't want to eat up too much of the
> > > > > > stack, right?
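
For reference, a NODEMASK_ALLOC()-based version would look roughly like
this (just a sketch; the gfp flags and error value are placeholders that
would depend on the calling context):

  /* Sketch only: heap-allocate the nodemask instead of using the stack.
   * On configs with NODES_SHIFT <= 8, NODEMASK_ALLOC() degrades to a
   * plain stack variable and NODEMASK_FREE() becomes a no-op. */
  NODEMASK_ALLOC(nodemask_t, unvisited, GFP_KERNEL);

  if (!unvisited)
  	return -ENOMEM;
  nodes_copy(*unvisited, node_states[N_ONLINE]);
  /* ... scan for an idle CPU ... */
  NODEMASK_FREE(unvisited);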
> > > > > 
> > > > > Ok, and if I want to initialize unvisited to all online nodes, is there
> > > > > a better way than doing:
> > > > > 
> > > > >   nodemask_clear(*unvisited);
> > > > >   nodemask_or(*unvisited, *unvisited, node_states[N_ONLINE]);
> > > > > 
> > > > > We don't have nodemask_copy(), right?
> > > > 
> > > > Sorry, and with that I mean nodes_clear() / nodes_or() / nodes_copy().
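
FWIW, nodes_copy() does exist in <linux/nodemask.h> (it's only the
nodemask_*() spelling that doesn't), so the two-step clear+or above can
be collapsed into a single copy, e.g.:

  /* Sketch: initialize the mask to the online nodes in one step. */
  nodemask_t unvisited;

  nodes_copy(unvisited, node_states[N_ONLINE]);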
> > > 
> > > Also, it might be problematic to use NODEMASK_ALLOC() here, since we're
> > > potentially holding raw spinlocks. Maybe we could use per-cpu nodemask_t,
> > > but then we need to preempt_disable() the entire loop, since
> > > scx_pick_idle_cpu() can potentially be called from any context.
> > > 
> > > Considering that the maximum value for NODES_SHIFT is 10 (with
> > > CONFIG_MAXSMP), nodemask_t should be 128 bytes at most, which doesn't seem
> > > too bad... Maybe we can accept having it on the stack in this case?
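
(For the arithmetic: NODES_SHIFT = 10 gives MAX_NUMNODES = 1 << 10 = 1024
bits, i.e. 1024 / 8 = 128 bytes per nodemask_t.)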
> > 
> > If you expect this to be called in strict SMP lock-held or IRQ contexts, you
> > need to be careful about stack overflow even more. We've got GFP_ATOMIC for
> > that:
> >      non sleeping allocation with an expensive fallback so it can access
> >      some portion of memory reserves. Usually used from interrupt/bottom-half
> >      context with an expensive slow path fallback.
> > 
> > Check Documentation/core-api/memory-allocation.rst for other options.
> > You may be interested in __GFP_NORETRY as well.
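
The GFP_ATOMIC variant would only differ in the flags, roughly (sketch
only; as the lockdep splat below shows, it still isn't enough when a raw
spinlock is held):

  /* Sketch: non-sleeping allocation attempt; still trips lockdep when
   * called with a raw spinlock held, as shown below. */
  NODEMASK_ALLOC(nodemask_t, unvisited, GFP_ATOMIC | __GFP_NORETRY);
  if (!unvisited)
  	return -EBUSY;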
> 
> I know about GFP_ATOMIC, but even with that I'm hitting some bugs.
> Will try with __GFP_NORETRY.

...which is basically this (with GFP_ATOMIC):

[   11.829079] =============================
[   11.829109] [ BUG: Invalid wait context ]
[   11.829146] 6.13.0-virtme #51 Not tainted
[   11.829185] -----------------------------
[   11.829243] fish/344 is trying to lock:
[   11.829285] ffff9659bec450b0 (&c->lock){..-.}-{3:3}, at: ___slab_alloc+0x66/0x1510
[   11.829380] other info that might help us debug this:
[   11.829450] context-{5:5}
[   11.829494] 8 locks held by fish/344:
[   11.829534]  #0: ffff965a409c70a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x28/0x60
[   11.829643]  #1: ffff965a409c7130 (&tty->atomic_write_lock){+.+.}-{4:4}, at: file_tty_write.isra.0+0xa1/0x330
[   11.829765]  #2: ffff965a409c72e8 (&tty->termios_rwsem/1){++++}-{4:4}, at: n_tty_write+0x9e/0x510
[   11.829871]  #3: ffffbc6d01433380 (&ldata->output_lock){+.+.}-{4:4}, at: n_tty_write+0x1f1/0x510
[   11.829979]  #4: ffffffffb556b5c0 (rcu_read_lock){....}-{1:3}, at: __queue_work+0x59/0x680
[   11.830173]  #5: ffff9659800f0018 (&pool->lock){-.-.}-{2:2}, at: __queue_work+0xd7/0x680
[   11.830286]  #6: ffff9659801bcf60 (&p->pi_lock){-.-.}-{2:2}, at: try_to_wake_up+0x56/0x920
[   11.830396]  #7: ffffffffb556b5c0 (rcu_read_lock){....}-{1:3}, at: scx_select_cpu_dfl+0x56/0x460

And I think that's because:

 * %GFP_ATOMIC users can not sleep and need the allocation to succeed. A lower
 * watermark is applied to allow access to "atomic reserves".
 * The current implementation doesn't support NMI and few other strict
 * non-preemptive contexts (e.g. raw_spin_lock). The same applies to %GFP_NOWAIT.

So I guess the only viable option is to preallocate the nodemask_t and
protect it somehow, hoping that it doesn't add too much overhead...
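
Something along these lines, maybe (a rough sketch with made-up names,
assuming per-CPU preallocated masks guarded by preempt_disable(); IRQ
callers would still need extra protection, e.g. disabling interrupts):

  /* Rough sketch (names hypothetical): preallocate one nodemask per CPU
   * and disable preemption while the local mask is in use. This alone
   * does not protect against reentrancy from IRQ context. */
  static DEFINE_PER_CPU(nodemask_t, scx_unvisited);

  s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags)
  {
  	nodemask_t *unvisited;
  	s32 cpu = -EBUSY;

  	preempt_disable();
  	unvisited = this_cpu_ptr(&scx_unvisited);
  	nodes_copy(*unvisited, node_states[N_ONLINE]);

  	/* ... visit nodes in order of increasing distance from @node ... */

  	preempt_enable();
  	return cpu;
  }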

-Andrea



