Re: [PATCH 0/3] improve nvme quiesce time for large amount of namespaces

On Sun, Jul 31, 2022 at 01:23:36PM +0300, Sagi Grimberg wrote:
But maybe we can avoid that: because we allocate
the connect_q ourselves, and fully know that it should
not be part of the tagset quiesce, perhaps we can introduce
a new interface like:
--
static inline int nvme_ctrl_init_connect_q(struct nvme_ctrl *ctrl)
{
    ctrl->connect_q = blk_mq_init_queue_self_quiesce(ctrl->tagset);
    if (IS_ERR(ctrl->connect_q))
        return PTR_ERR(ctrl->connect_q);
    return 0;
}
--

And then blk_mq_quiesce_tagset can simply look into a per request-queue
self_quiesce flag and skip as needed.

I'd just make that a queue flag set after allocation to keep the
interface simple, but otherwise this seems like the right thing
to do.
The code currently uses NVME_NS_STOPPED to avoid unpaired stop/start calls.
If we switch to blk_mq_quiesce_tagset, that mechanism no longer applies.
Reviewing the code, only PCI cannot guarantee that stop/start stays paired.
So one option is to use blk_mq_quiesce_tagset only for fabrics, not for PCI.
Do you think that's acceptable?
If so, I will try to send a patch set.

I don't think that this is acceptable. But I don't understand how
NVME_NS_STOPPED would change anything in the behavior of tagset-wide
quiesce?


