On 21/01/2021 01:54, Xiaoguang Wang wrote:
> hi Pavel,
>
>> On 20/01/2021 08:11, Joseph Qi wrote:
>>> Abaci reported the following BUG:
>>>
>>> [   27.629441] BUG: sleeping function called from invalid context at fs/file.c:402
>>> [   27.631317] in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 1012, name: io_wqe_worker-0
>>> [   27.633220] 1 lock held by io_wqe_worker-0/1012:
>>> [   27.634286]  #0: ffff888105e26c98 (&ctx->completion_lock){....}-{2:2}, at: __io_req_complete.part.102+0x30/0x70
>>> [   27.636487] irq event stamp: 66658
>>> [   27.637302] hardirqs last  enabled at (66657): [<ffffffff8144ba02>] kmem_cache_free+0x1f2/0x3b0
>>> [   27.639211] hardirqs last disabled at (66658): [<ffffffff82003a77>] _raw_spin_lock_irqsave+0x17/0x50
>>> [   27.641196] softirqs last  enabled at (64686): [<ffffffff824003c5>] __do_softirq+0x3c5/0x5aa
>>> [   27.643062] softirqs last disabled at (64681): [<ffffffff8220108f>] asm_call_irq_on_stack+0xf/0x20
>>> [   27.645029] CPU: 1 PID: 1012 Comm: io_wqe_worker-0 Not tainted 5.11.0-rc4+ #68
>>> [   27.646651] Hardware name: Alibaba Cloud Alibaba Cloud ECS, BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
>>> [   27.649249] Call Trace:
>>> [   27.649874]  dump_stack+0xac/0xe3
>>> [   27.650666]  ___might_sleep+0x284/0x2c0
>>> [   27.651566]  put_files_struct+0xb8/0x120
>>> [   27.652481]  __io_clean_op+0x10c/0x2a0
>>> [   27.653362]  __io_cqring_fill_event+0x2c1/0x350
>>> [   27.654399]  __io_req_complete.part.102+0x41/0x70
>>> [   27.655464]  io_openat2+0x151/0x300
>>> [   27.656297]  io_issue_sqe+0x6c/0x14e0
>>> [   27.657170]  ? lock_acquire+0x31a/0x440
>>> [   27.658068]  ? io_worker_handle_work+0x24e/0x8a0
>>> [   27.659119]  ? find_held_lock+0x28/0xb0
>>> [   27.660026]  ? io_wq_submit_work+0x7f/0x240
>>> [   27.660991]  io_wq_submit_work+0x7f/0x240
>>> [   27.661915]  ? trace_hardirqs_on+0x46/0x110
>>> [   27.662890]  io_worker_handle_work+0x501/0x8a0
>>> [   27.663917]  ? io_wqe_worker+0x135/0x520
>>> [   27.664836]  io_wqe_worker+0x158/0x520
>>> [   27.665719]  ? __kthread_parkme+0x96/0xc0
>>> [   27.666663]  ? io_worker_handle_work+0x8a0/0x8a0
>>> [   27.667726]  kthread+0x134/0x180
>>> [   27.668506]  ? kthread_create_worker_on_cpu+0x90/0x90
>>> [   27.669641]  ret_from_fork+0x1f/0x30
>>>
>>> It blames we call cond_resched() with completion_lock when clean
>>> request. In fact we will do it during flush overflow and it seems we
>>> have no reason to do it before. So just remove io_clean_op() in
>>> __io_cqring_fill_event() to fix this BUG.
>>
>> Nope, it would be broken. You may override, e.g. iov pointer
>> that is dynamically allocated, and the function makes sure all
>> those are deleted and freed. Most probably there will be problems
>> on flush side as well.
> Could you please explain more why this is a problem?
> io_clean_op just does some clean work, frees allocated memory, puts files, etc,
> and these jobs should be doable in __io_cqring_overflow_flush():

struct io_kiocb {
	union {
		struct file		*file;
		struct io_rw		rw;
		...
		/* use only after cleaning per-op data, see io_clean_op() */
		struct io_completion	compl;
	};
};

io_clean_op() cleans everything in the first 64B (and not only that), and
that space is reused for overflow lists, etc.:

	io_clean_op(req);
	req->compl.cflags = cflags;
-----
	list_add_tail(&req->compl.list, &ctx->cq_overflow_list);
-----

That's the reason why we need to call it.

A bit different story is why it does drop_files(). At one time it was in
io_req_clean_work(), which is called without locks held, but there were
nasty races with cancellations of overflowed reqs, so it was much easier
to move it into io_clean_op(), so we just don't ever have requests with
->files in overflowed lists.

As we just changed that cancellation scheme, those races don't exist
anymore, and it could be moved back as in the diff.

> while (!list_empty(&list)) {
>     req = list_first_entry(&list, struct io_kiocb, compl.list);
>     list_del(&req->compl.list);
>     io_put_req(req); // will call io_clean_op
> }

-- 
Pavel Begunkov