Hello,

Request queue quiescing is used by many block drivers and by the block
core itself, from different, unrelated code paths. So far, both quiesce
and unquiesce change the queue state unconditionally. This has caused
trouble: while a driver is quiescing a queue for its own purposes, the
block core may quiesce the same queue too (because of an elevator
switch, an update of nr_requests, or another queue-attribute change),
and the matching unquiesce then re-enables dispatch too early. A kernel
panic has been observed when running a stress test combining dm-mpath
suspend with nr_requests updates.

Fix the issue by supporting nested queue quiescing. However, nvme uses
quiesce/unquiesce in complicated ways, and the two are not always
called in pairs, so patches 1~4 first convert nvme to paired calls, and
patch 5 then adds nested queue quiescing (see the sketch at the end of
this mail).

Ming Lei (5):
  nvme: add APIs for stopping/starting admin queue
  nvme: apply nvme API to quiesce/unquiesce admin queue
  nvme: prepare for pairing quiescing and unquiescing
  nvme: pairing quiesce/unquiesce
  blk-mq: support nested blk_mq_quiesce_queue()

 block/blk-mq.c             |  20 +++++--
 drivers/nvme/host/core.c   | 107 +++++++++++++++++++++++++++++--------
 drivers/nvme/host/fc.c     |   8 +--
 drivers/nvme/host/nvme.h   |   6 +++
 drivers/nvme/host/pci.c    |   8 +--
 drivers/nvme/host/rdma.c   |  14 ++---
 drivers/nvme/host/tcp.c    |  16 +++---
 drivers/nvme/target/loop.c |   4 +-
 include/linux/blkdev.h     |   2 +
 9 files changed, 135 insertions(+), 50 deletions(-)

--
2.31.1
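
P.S. For reference, below is a minimal userspace sketch of the counting
scheme that nested quiescing relies on: only the first quiesce actually
stops dispatch, and only the last unquiesce restarts it. All names here
(toy_queue, toy_quiesce, toy_unquiesce) are illustrative stand-ins, not
the kernel's blk-mq APIs, and the pthread mutex stands in for the queue
lock.

#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Toy model of a request queue with counted (nested) quiescing. */
struct toy_queue {
	pthread_mutex_t lock;
	int quiesce_depth;   /* outstanding quiesce calls */
	bool quiesced;       /* models the "dispatch stopped" state */
};

static void toy_quiesce(struct toy_queue *q)
{
	pthread_mutex_lock(&q->lock);
	/* Only the first caller actually flips the state. */
	if (!q->quiesce_depth++)
		q->quiesced = true;
	pthread_mutex_unlock(&q->lock);
}

static void toy_unquiesce(struct toy_queue *q)
{
	bool run_queue = false;

	pthread_mutex_lock(&q->lock);
	assert(q->quiesce_depth > 0);
	/* Only the last caller re-enables dispatch. */
	if (!--q->quiesce_depth) {
		q->quiesced = false;
		run_queue = true;
	}
	pthread_mutex_unlock(&q->lock);

	if (run_queue)
		printf("dispatch requests held back during quiesce\n");
}

int main(void)
{
	struct toy_queue q = { PTHREAD_MUTEX_INITIALIZER, 0, false };

	toy_quiesce(&q);    /* e.g. driver quiesces for its own purpose */
	toy_quiesce(&q);    /* e.g. block core quiesces for nr_requests */
	toy_unquiesce(&q);  /* depth is still 1: queue stays quiesced */
	assert(q.quiesced);
	toy_unquiesce(&q);  /* last unquiesce really restarts the queue */
	assert(!q.quiesced);
	return 0;
}

With unconditional state changes, the first unquiesce in the sequence
above would have restarted dispatch while the other user still expected
the queue to be quiesced, which is exactly the dm-mpath/nr_requests
race described above.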