On 3/11/25 4:53 PM, syzbot wrote:
> ======================================================
> WARNING: possible circular locking dependency detected
> 6.14.0-rc5-syzkaller-g77c95b8c7a16 #0 Not tainted
> ------------------------------------------------------
> syz.3.85/7036 is trying to acquire lock:
> ffff0000cf4f89b8 (&vma->vm_lock->lock){++++}-{4:4}, at: vma_start_write include/linux/mm.h:770 [inline]
> ffff0000cf4f89b8 (&vma->vm_lock->lock){++++}-{4:4}, at: vm_flags_set include/linux/mm.h:900 [inline]
> ffff0000cf4f89b8 (&vma->vm_lock->lock){++++}-{4:4}, at: io_region_mmap io_uring/memmap.c:312 [inline]
> ffff0000cf4f89b8 (&vma->vm_lock->lock){++++}-{4:4}, at: io_uring_mmap+0x37c/0x504 io_uring/memmap.c:339
>
> but task is already holding lock:
> ffff0000f51da8d8 (&ctx->mmap_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:201 [inline]
> ffff0000f51da8d8 (&ctx->mmap_lock){+.+.}-{4:4}, at: io_uring_mmap+0x100/0x504 io_uring/memmap.c:325
>
> which lock already depends on the new lock.
>
>
> the existing dependency chain (in reverse order) is:
>
> -> #9 (&ctx->mmap_lock){+.+.}-{4:4}:
>        __mutex_lock_common+0x1f0/0x24b8 kernel/locking/mutex.c:585
>        __mutex_lock kernel/locking/mutex.c:730 [inline]
>        mutex_lock_nested+0x2c/0x38 kernel/locking/mutex.c:782
>        class_mutex_constructor include/linux/mutex.h:201 [inline]
>        io_uring_get_unmapped_area+0x84/0x348 io_uring/memmap.c:357
>        __get_unmapped_area+0x1d8/0x364 mm/mmap.c:846
>        do_mmap+0x4a8/0x1150 mm/mmap.c:409
>        vm_mmap_pgoff+0x228/0x3c4 mm/util.c:575
>        ksys_mmap_pgoff+0x3a4/0x5c8 mm/mmap.c:607
>        __do_sys_mmap arch/arm64/kernel/sys.c:28 [inline]
>        __se_sys_mmap arch/arm64/kernel/sys.c:21 [inline]
>        __arm64_sys_mmap+0xf8/0x110 arch/arm64/kernel/sys.c:21
>        __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
>        invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:49
>        el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:132
>        do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:151
>        el0_svc+0x54/0x168 arch/arm64/kernel/entry-common.c:744
>        el0t_64_sync_handler+0x84/0x108 arch/arm64/kernel/entry-common.c:762
>        el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:600

Not sure I see how this isn't either happening all the time, or not
happening at all... But in any case, it seems trivial to move the vma
lock outside of a dependency with the ctx mmap_lock: we can just set
VM_DONTEXPAND upfront. Yes, that'll leave it set if we fail, which
should be fine as far as I can tell (though it'd be trivial to clear it
again).

diff --git a/io_uring/memmap.c b/io_uring/memmap.c
index 76fcc79656b0..aeaf4be48838 100644
--- a/io_uring/memmap.c
+++ b/io_uring/memmap.c
@@ -311,7 +311,6 @@ static int io_region_mmap(struct io_ring_ctx *ctx,
 {
 	unsigned long nr_pages = min(mr->nr_pages, max_pages);
 
-	vm_flags_set(vma, VM_DONTEXPAND);
 	return vm_insert_pages(vma, vma->vm_start, mr->pages, &nr_pages);
 }
 
@@ -324,6 +323,8 @@ __cold int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
 	struct io_mapped_region *region;
 	void *ptr;
 
+	vm_flags_set(vma, VM_DONTEXPAND);
+
 	guard(mutex)(&ctx->mmap_lock);
 
 	ptr = io_uring_validate_mmap_request(file, vma->vm_pgoff, sz);

-- 
Jens Axboe
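
To make the ordering argument concrete: the trace shows vm_flags_set()
taking vma->vm_lock (via vma_start_write()) while ctx->mmap_lock is held,
while the chain through io_uring_get_unmapped_area() establishes the
reverse order. The patch simply finishes the vma-lock-protected work
before taking the ctx lock, so the two never nest. Below is a minimal
userspace sketch of that same fix, with pthread mutexes standing in for
ctx->mmap_lock and vma->vm_lock; all names are illustrative, this is not
kernel code.

/* Sketch: breaking a lock cycle by reordering acquisitions.
 * "mmap_lock" stands in for ctx->mmap_lock, "vm_lock" for
 * vma->vm_lock. Hypothetical names, not the io_uring source. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t mmap_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t vm_lock   = PTHREAD_MUTEX_INITIALIZER;

/* Pre-patch shape: vm_lock is acquired while mmap_lock is held,
 * creating a mmap_lock -> vm_lock edge that can deadlock against
 * any path taking the locks in the opposite order. */
static void mmap_handler_old(void)
{
	pthread_mutex_lock(&mmap_lock);
	pthread_mutex_lock(&vm_lock);	/* nested: bad ordering */
	/* ... set vma flags ... */
	pthread_mutex_unlock(&vm_lock);
	pthread_mutex_unlock(&mmap_lock);
}

/* Post-patch shape: the vm_lock-protected flag write completes
 * before mmap_lock is taken, so the locks are never held together
 * and no ordering edge exists between them. */
static void mmap_handler_new(void)
{
	pthread_mutex_lock(&vm_lock);
	/* ... set vma flags (VM_DONTEXPAND in the real code) ... */
	pthread_mutex_unlock(&vm_lock);

	pthread_mutex_lock(&mmap_lock);
	/* ... validate the mmap request, insert pages ... */
	pthread_mutex_unlock(&mmap_lock);
}

int main(void)
{
	mmap_handler_old();
	mmap_handler_new();
	puts("done: new handler never nests the two locks");
	return 0;
}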