Thanks for the quick patch! I just ran my reproducer 50x and did not
observe any kernel panics (either in the ktest environment or on bare
metal), so it does seem that this resolves the issue on our end.

-Mitchell Augustin

On Sun, Mar 31, 2024 at 4:52 PM Kent Overstreet
<kent.overstreet@xxxxxxxxx> wrote:
>
> list_del_init_careful() needs to be the last access to the wait queue
> entry - it effectively unlocks access.
>
> Previously, finish_wait() would see the empty list head and skip taking
> the lock, and then we'd return - but the completion path would still
> attempt to do the wakeup after the task_struct pointer had been
> overwritten.
>
> Fixes: 71eb6b6b0ba9 ("fs/aio: obey min_nr when doing wakeups")
> Cc: linux-stable@xxxxxxxxxxxxxxx
> Link: https://lore.kernel.org/linux-fsdevel/CAHTA-ubfwwB51A5Wg5M6H_rPEQK9pNf8FkAGH=vr=FEkyRrtqw@xxxxxxxxxxxxxx/
> Signed-off-by: Kent Overstreet <kent.overstreet@xxxxxxxxx>
> ---
>  fs/aio.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/fs/aio.c b/fs/aio.c
> index 9cdaa2faa536..0f4f531c9780 100644
> --- a/fs/aio.c
> +++ b/fs/aio.c
> @@ -1202,8 +1202,8 @@ static void aio_complete(struct aio_kiocb *iocb)
>  		spin_lock_irqsave(&ctx->wait.lock, flags);
>  		list_for_each_entry_safe(curr, next, &ctx->wait.head, w.entry)
>  			if (avail >= curr->min_nr) {
> -				list_del_init_careful(&curr->w.entry);
>  				wake_up_process(curr->w.private);
> +				list_del_init_careful(&curr->w.entry);
>  			}
>  		spin_unlock_irqrestore(&ctx->wait.lock, flags);
>  	}
> --
> 2.43.0
>
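
For anyone following the race: the reason the ordering matters is visible
on the waiter side, where finish_wait() treats an empty list head as
"already unlinked, no lock needed". A rough paraphrase of the generic
helper in kernel/sched/wait.c (not the exact source, shown only to
illustrate the ordering) looks like:

    void finish_wait(struct wait_queue_head *wq_head,
                     struct wait_queue_entry *wq_entry)
    {
            unsigned long flags;

            __set_current_state(TASK_RUNNING);
            /* Empty entry means the waker already removed us: skip the lock. */
            if (!list_empty_careful(&wq_entry->entry)) {
                    spin_lock_irqsave(&wq_head->lock, flags);
                    list_del_init(&wq_entry->entry);
                    spin_unlock_irqrestore(&wq_head->lock, flags);
            }
    }

So once aio_complete() has called list_del_init_careful(), the waiter can
fall straight through finish_wait(), return from its sleep in the
io_getevents() path, and reuse the stack holding the on-stack waiter
structure - at which point the old wake_up_process(curr->w.private) was
reading freed stack memory. Doing the wakeup first, while the entry is
still linked, closes that window.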