Hello,

Request queue quiescing is applied in lots of block drivers and in the
block core, from different/unrelated code paths. So far, both quiesce
and unquiesce change the queue state unconditionally. This has caused
trouble: for example, a driver quiesces the queue for its own purpose,
but the block core may quiesce it as well (because of an elevator
switch, an nr_requests update, or another queue attribute change), and
the resulting unquiesce can then arrive too early for the driver. A
kernel panic has been observed when running a stress test that combines
dm-mpath suspend with updating nr_requests.

Fix the issue by supporting concurrent (nested) queue quiescing.
However, nvme uses quiesce/unquiesce in complicated ways and the two
are not always called in pairs, so patches 1~4 convert nvme to paired
calls first, and patch 6 provides the nested queue quiesce (an
illustrative sketch of the idea is appended after the diffstat).

V4:
	- one small style change as suggested by Christoph; only patch
	  6/6 is touched

V3:
	- add patch 5/6 to clear NVME_CTRL_ADMIN_Q_STOPPED for nvme-loop
	  after reallocating the admin queue
	- take Bart's suggestion to add a warning in
	  blk_mq_unquiesce_queue() & update the commit log

V2:
	- replace the mutex with atomic ops to support paired quiesce &
	  unquiesce

Ming Lei (6):
  nvme: add APIs for stopping/starting admin queue
  nvme: apply nvme API to quiesce/unquiesce admin queue
  nvme: prepare for pairing quiescing and unquiescing
  nvme: pairing quiesce/unquiesce
  nvme: loop: clear NVME_CTRL_ADMIN_Q_STOPPED after admin queue is
    reallocated
  blk-mq: support concurrent queue quiesce/unquiesce

 block/blk-mq.c             | 22 ++++++++++--
 drivers/nvme/host/core.c   | 70 ++++++++++++++++++++++++++------------
 drivers/nvme/host/fc.c     |  8 ++---
 drivers/nvme/host/nvme.h   |  4 +++
 drivers/nvme/host/pci.c    |  8 ++---
 drivers/nvme/host/rdma.c   | 14 ++++----
 drivers/nvme/host/tcp.c    | 16 ++++----
 drivers/nvme/target/loop.c |  6 ++--
 include/linux/blkdev.h     |  2 ++
 9 files changed, 100 insertions(+), 50 deletions(-)

-- 
2.31.1
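
For illustration only, not taken from the patches themselves (and the
real series may use different locking, see the V2 note about atomic
ops): a minimal user-space C sketch of the depth-counting idea behind
nested quiesce/unquiesce. Only the 0 -> 1 quiesce and 1 -> 0 unquiesce
transitions touch the actual queue state, so a driver's quiesce and a
block-core quiesce can overlap without the queue being restarted too
early. All names here (fake_queue, quiesce, unquiesce) are invented for
the sketch.

/*
 * Illustrative sketch of nested quiesce/unquiesce using a depth counter.
 * Only the first quiesce and the last unquiesce change the "real" queue
 * state, so unrelated callers can pair their own quiesce/unquiesce
 * without prematurely re-enabling the queue for somebody else.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_queue {
	pthread_mutex_t lock;
	int quiesce_depth;	/* callers currently wanting the queue quiesced */
	bool stopped;		/* the "real" queue state in this sketch */
};

static void quiesce(struct fake_queue *q)
{
	pthread_mutex_lock(&q->lock);
	if (q->quiesce_depth++ == 0) {
		q->stopped = true;	/* first caller actually stops dispatch */
		printf("queue stopped\n");
	}
	pthread_mutex_unlock(&q->lock);
}

static void unquiesce(struct fake_queue *q)
{
	pthread_mutex_lock(&q->lock);
	if (q->quiesce_depth <= 0) {
		/* mirrors the warning added to blk_mq_unquiesce_queue() */
		fprintf(stderr, "unbalanced unquiesce\n");
	} else if (--q->quiesce_depth == 0) {
		q->stopped = false;	/* last caller restarts dispatch */
		printf("queue restarted\n");
	}
	pthread_mutex_unlock(&q->lock);
}

int main(void)
{
	struct fake_queue q = { .lock = PTHREAD_MUTEX_INITIALIZER };

	quiesce(&q);	/* e.g. driver quiesces for its own purpose */
	quiesce(&q);	/* e.g. block core quiesces for an elevator switch */
	unquiesce(&q);	/* block core done: queue must stay stopped */
	unquiesce(&q);	/* driver done: queue restarts only now */
	return 0;
}

With such counting, the premature unquiesce in the dm-mpath suspend +
nr_requests scenario above cannot happen, because the driver's
outstanding quiesce keeps the depth nonzero until the driver itself
unquiesces.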