On Tue, Oct 12, 2021 at 12:36:20PM +0200, Christoph Hellwig wrote:
> On Sat, Oct 09, 2021 at 11:47:11AM +0800, Ming Lei wrote:
> > The current blk_mq_quiesce_queue() and blk_mq_unquiesce_queue() always
> > stop and start the queue unconditionally. There can be concurrent
> > quiesce/unquiesce calls coming from different, unrelated code paths,
> > so an unquiesce may arrive unexpectedly and start the queue too early.
> >
> > Prepare for supporting concurrent quiesce/unquiesce from multiple
> > contexts, so that we can address the above issue.
> >
> > NVMe has a very complicated quiesce/unquiesce usage pattern; add one
> > atomic bit for making sure that blk-mq quiesce/unquiesce is always
> > called in pairs.
>
> Can you explain the need for these bits a little more? If they are
> unbalanced we should probably fix the root cause.
>
> What issues did you see?

There are lots of unbalanced usages in nvme, such as:

1) nvme-pci: nvme_dev_disable() can be called more than once before a
reset starts, so there are multiple nvme_stop_queues() calls vs. a
single nvme_start_queues().

2) nvme_kill_queues() forcibly unquiesces queues even though they were
never quiesced, and similar usage can be seen in tcp/fc/rdma too.

Once quiesce and unquiesce are run from different contexts, it is not
easy to audit whether the two are done in pairs.

Thanks,
Ming