3.18-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Al Viro <viro@xxxxxxxxxxxxxxxxxx>

commit 4faa99965e027cc057c5145ce45fa772caa04e8d upstream.

If io_destroy() gets to cancelling everything that can be cancelled and
gets to kiocb_cancel() calling the function the driver has left in
->ki_cancel, it becomes vulnerable to a race with IO completion.  At that
point req is already taken off the list and aio_complete() does *NOT*
spin until we (in free_ioctx_users()) release ->ctx_lock.  As a result,
it proceeds to kiocb_free(), freeing req just as it gets passed to
->ki_cancel().

The fix is simple - remove the request from the list after the call of
kiocb_cancel().  All instances of ->ki_cancel() already have to cope with
being called with the iocb still on the list - that's what happens in
io_cancel(2).

Cc: stable@xxxxxxxxxx
Fixes: 0460fef2a921 "aio: use cancellation list lazily"
Signed-off-by: Al Viro <viro@xxxxxxxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 fs/aio.c |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

--- a/fs/aio.c
+++ b/fs/aio.c
@@ -560,9 +560,8 @@ static void free_ioctx_users(struct perc
 	while (!list_empty(&ctx->active_reqs)) {
 		req = list_first_entry(&ctx->active_reqs,
 				       struct kiocb, ki_list);
-
-		list_del_init(&req->ki_list);
 		kiocb_cancel(req);
+		list_del_init(&req->ki_list);
 	}
 
 	spin_unlock_irq(&ctx->ctx_lock);
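
For readers less familiar with the aio cancel path, below is a minimal
user-space sketch of the corrected ordering.  It is not kernel code: the
list is a simplified stand-in for ->ki_list / ->active_reqs, kiocb_cancel()
is a stub, and neither ->ctx_lock nor a concurrent aio_complete() is
modelled.  It only illustrates "cancel while the request is still on the
list, then unlink", which is what the hunk above changes.

/*
 * Simplified, single-threaded model of the corrected loop in
 * free_ioctx_users().  Names mirror the kernel's but everything here is
 * a stand-in; build with: cc -std=c99 sketch.c
 */
#include <stdio.h>
#include <stdlib.h>

struct kiocb {
	struct kiocb *next, *prev;	/* stand-in for ->ki_list */
	int id;
};

/* circular doubly-linked list head, kernel list.h style */
static struct kiocb active_reqs = { &active_reqs, &active_reqs, -1 };

static void list_add(struct kiocb *req)
{
	req->next = active_reqs.next;
	req->prev = &active_reqs;
	active_reqs.next->prev = req;
	active_reqs.next = req;
}

static void list_del_init(struct kiocb *req)
{
	req->prev->next = req->next;
	req->next->prev = req->prev;
	req->next = req->prev = req;
}

static int list_empty(void)
{
	return active_reqs.next == &active_reqs;
}

/*
 * Stub for the driver's ->ki_cancel() path: like the real instances, it
 * has to tolerate being called with the iocb still on the list, which is
 * what io_cancel(2) already does.
 */
static void kiocb_cancel(struct kiocb *req)
{
	printf("cancel req %d, still linked: %s\n", req->id,
	       req->next != req ? "yes" : "no");
}

int main(void)
{
	for (int i = 0; i < 3; i++) {
		struct kiocb *req = malloc(sizeof(*req));
		req->id = i;
		list_add(req);
	}

	while (!list_empty()) {
		struct kiocb *req = active_reqs.next;

		kiocb_cancel(req);	/* req is still linked here */
		list_del_init(req);	/* only now taken off the list */
		free(req);		/* stand-in for kiocb_free() */
	}
	return 0;
}

In the real code it is this ordering, together with ->ctx_lock being held
across the loop, that forces a racing completion to serialize on the lock
instead of freeing req underneath ->ki_cancel().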