On 2022/01/26 0:47, Christoph Hellwig wrote:
> I'm sometimes a little slow, but I still fail to understand why we need
> all this. Your cut down patch that moves the destroy_workqueue call
> and the work_struct fixed all the know lockdep issues, right?

Right. Moving destroy_workqueue() (together with blk_mq_freeze_queue()) in
__loop_clr_fd() to WQ context fixed the known lockdep issue.

> And the only other problem we're thinking off is that blk_mq_freeze_queue
> could have the same effect,

Right. blk_mq_freeze_queue() is still called with disk->open_mutex held,
because lo_release() contains the

	} else if (lo->lo_state == Lo_bound) {
		/*
		 * Otherwise keep thread (if running) and config,
		 * but flush possible ongoing bios in thread.
		 */
		blk_mq_freeze_queue(lo->lo_queue);
		blk_mq_unfreeze_queue(lo->lo_queue);
	}

path.

> except that lockdep doesn't track it and
> we've not seen it in the wild.

It is difficult to test. Fuzzers cannot exercise fsfreeze paths, because
failing to issue the corresponding unfreeze request leaves virtual machines
unresponsive.

> As far as I can tell we do not need the freeze at all for given that
> by the time release is called I/O is quiesced.

Why? lo_release() is called when close() is called. But writeback of
previously executed buffered write() calls, whether periodically scheduled
or triggered on demand, can start while lo_release() or __loop_clr_fd() is
running.

Then, why not wait for those I/O requests to complete? Isn't that the reason
the

	} else if (lo->lo_state == Lo_bound) {
		/*
		 * Otherwise keep thread (if running) and config,
		 * but flush possible ongoing bios in thread.
		 */
		blk_mq_freeze_queue(lo->lo_queue);
		blk_mq_unfreeze_queue(lo->lo_queue);
	}

path in lo_release() exists in the first place?
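
For reference, below is a minimal sketch of what I mean by moving those calls
to WQ context. This is only an illustration under my assumptions, not the
actual patch; the rundown_work field and the function names are made up here:

	/*
	 * Minimal sketch (illustrative, not the actual patch): defer the
	 * parts of __loop_clr_fd() that lockdep complains about, namely
	 * destroy_workqueue() and the freeze/unfreeze pair, to workqueue
	 * context so that they no longer run under disk->open_mutex.
	 * "rundown_work" is an assumed field of struct loop_device.
	 */
	static void loop_rundown_workfn(struct work_struct *work)
	{
		struct loop_device *lo = container_of(work, struct loop_device,
						      rundown_work);

		/* This runs without disk->open_mutex held. */
		blk_mq_freeze_queue(lo->lo_queue);
		destroy_workqueue(lo->workqueue);
		/* ... rest of the rundown (detach backing file, etc.) ... */
		blk_mq_unfreeze_queue(lo->lo_queue);
	}

	static void loop_schedule_rundown(struct loop_device *lo)
	{
		INIT_WORK(&lo->rundown_work, loop_rundown_workfn);
		queue_work(system_long_wq, &lo->rundown_work);
	}

The point is only that the release path would merely queue the work, so
nothing that can wait for in-flight I/O runs while disk->open_mutex is held.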