Re: sorting out the freeze / quiesce mess

On Wed, Nov 10, 2021 at 01:58:56PM +0100, Christoph Hellwig wrote:
> On Wed, Nov 10, 2021 at 05:29:26PM +0800, Ming Lei wrote:
> > On Wed, Nov 10, 2021 at 10:14:07AM +0100, Christoph Hellwig wrote:
> > > Hi Jens and Ming,
> > > 
> > > I've been looking into properly supporting queue freezing for bio based
> > > drivers (that is only release q_usage_counter on bio completion for them).
> > > And the deeper I look into the code the more I'm confused by us having
> > > the blk_mq_quiesce* interface in addition to blk_freeze_queue.  What
> > > is a good reason to do a quiesce separately from a freeze?
> > 
> > freeze can make sure that all requests are done, quiesce can make sure that
> > dispatch critical area(covered by hctx lock/unlock) is done.
> 
> Yeah, but why do we need to still call quiesce after we just did a
> freeze, which is about half of the users?

Because the caller needs to make sure the dispatch critical area is
done; otherwise the dispatch code path may observe an intermediate
state of the change, such as during an elevator switch.
__blk_mq_update_nr_hw_queues() may need quiesce too; I remember there
was a kernel panic log from some test.

Please see the point in commit 662156641bc4 ("block: don't drain
in-progress dispatch in blk_cleanup_queue()").
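The distinction above (freeze drains outstanding requests, quiesce fences the dispatch code path) is why call sites that mutate dispatch-visible state do both. A minimal sketch of that pattern, loosely modeled on the kernel's elevator-switch path -- the function names are the real blk-mq API, but the function and its body are illustrative, not the actual elevator code:

```c
/* Sketch only: assumes a valid struct request_queue *q in kernel context. */
static void switch_scheduler_sketch(struct request_queue *q)
{
	/* Wait for all in-flight requests to complete (drains q_usage_counter). */
	blk_mq_freeze_queue(q);

	/*
	 * Freezing alone does not stop a concurrent dispatch path that holds
	 * no request reference; quiesce waits for the RCU/SRCU-protected hctx
	 * dispatch sections to finish, so they cannot observe a half-updated
	 * elevator.
	 */
	blk_mq_quiesce_queue(q);

	/* ... safely tear down the old elevator and install the new one ... */

	blk_mq_unquiesce_queue(q);
	blk_mq_unfreeze_queue(q);
}
```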


Thanks,
Ming



