On Tue, Feb 11, 2025 at 11:38:27AM -0500, Steven Rostedt wrote:
> On Tue, 11 Feb 2025 15:45:15 +0100
> Andrea Righi <arighi@xxxxxxxxxx> wrote:
> 
> > ...which is basically this (with GFP_ATOMIC):
> > 
> > [   11.829079] =============================
> > [   11.829109] [ BUG: Invalid wait context ]
> > [   11.829146] 6.13.0-virtme #51 Not tainted
> > [   11.829185] -----------------------------
> > [   11.829243] fish/344 is trying to lock:
> > [   11.829285] ffff9659bec450b0 (&c->lock){..-.}-{3:3}, at: ___slab_alloc+0x66/0x1510
> > [   11.829380] other info that might help us debug this:
> > [   11.829450] context-{5:5}
> > [   11.829494] 8 locks held by fish/344:
> > [   11.829534]  #0: ffff965a409c70a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x28/0x60
> > [   11.829643]  #1: ffff965a409c7130 (&tty->atomic_write_lock){+.+.}-{4:4}, at: file_tty_write.isra.0+0xa1/0x330
> > [   11.829765]  #2: ffff965a409c72e8 (&tty->termios_rwsem/1){++++}-{4:4}, at: n_tty_write+0x9e/0x510
> > [   11.829871]  #3: ffffbc6d01433380 (&ldata->output_lock){+.+.}-{4:4}, at: n_tty_write+0x1f1/0x510
> > [   11.829979]  #4: ffffffffb556b5c0 (rcu_read_lock){....}-{1:3}, at: __queue_work+0x59/0x680
> > [   11.830173]  #5: ffff9659800f0018 (&pool->lock){-.-.}-{2:2}, at: __queue_work+0xd7/0x680
> > [   11.830286]  #6: ffff9659801bcf60 (&p->pi_lock){-.-.}-{2:2}, at: try_to_wake_up+0x56/0x920
> > [   11.830396]  #7: ffffffffb556b5c0 (rcu_read_lock){....}-{1:3}, at: scx_select_cpu_dfl+0x56/0x460
> > 
> > And I think that's because:
> > 
> >  * %GFP_ATOMIC users can not sleep and need the allocation to succeed. A lower
> >  * watermark is applied to allow access to "atomic reserves".
> >  * The current implementation doesn't support NMI and few other strict
> >  * non-preemptive contexts (e.g. raw_spin_lock). The same applies to %GFP_NOWAIT.
> > 
> > So I guess the only viable option is to preallocate the nodemask_t and
> > protect it somehow, hoping that it doesn't add too much overhead...
> 
> I believe it's because you have p->pi_lock, which is a raw_spin_lock(), and
> you are trying to take a lock in ___slab_alloc(), which I bet is a normal
> spin_lock(). In PREEMPT_RT that turns into a mutex, and you cannot take
> a spin_lock while holding a raw_spin_lock.

Exactly that, thanks Steve.

I'll run some tests using a per-cpu nodemask_t: given that most of the time
this is called with p->pi_lock held (which disables preemption, so the
per-CPU mask can't be clobbered by another task on the same CPU), it should
be safe and shouldn't introduce any overhead.

-Andrea
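
For reference, a minimal sketch of the per-cpu preallocation approach
described above (the names scx_select_nodemask and
scx_get_select_nodemask are made up for illustration, not taken from
any actual patch): one nodemask_t is reserved per CPU at build time,
and it is handed out only in contexts where preemption is disabled,
e.g. under p->pi_lock, so the hot path needs no allocation at all and
the mask is serialized per CPU:

/* Needs <linux/percpu.h>, <linux/nodemask.h>, <linux/lockdep.h>. */

/* One preallocated mask per CPU; hypothetical name. */
static DEFINE_PER_CPU(nodemask_t, scx_select_nodemask);

static nodemask_t *scx_get_select_nodemask(void)
{
	/*
	 * Callers must run with preemption disabled (e.g. while
	 * holding p->pi_lock), which keeps this_cpu_ptr() stable and
	 * prevents another task on this CPU from clobbering the mask.
	 */
	lockdep_assert_preemption_disabled();

	return this_cpu_ptr(&scx_select_nodemask);
}

The lockdep assertion documents (and enforces, with lockdep enabled)
the same invariant the reply relies on: that every caller already holds
p->pi_lock or otherwise runs with preemption disabled.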