Re: [PATCH V8 07/11] blk-mq: stop to handle IO and drain IO before hctx becomes inactive

On Wed, May 06, 2020 at 09:24:25AM +0800, Ming Lei wrote:
> On Tue, May 05, 2020 at 05:46:18PM +0200, Christoph Hellwig wrote:
> > On Thu, Apr 30, 2020 at 10:02:54PM +0800, Ming Lei wrote:
> > > BLK_MQ_S_INACTIVE is only set when the last cpu of this hctx is becoming
> > > offline, and blk_mq_hctx_notify_offline() is called from cpu hotplug
> > > handler. So if there is any request of this hctx submitted from somewhere,
> > > it has to be submitted on this last cpu. That is guaranteed by blk-mq's queue mapping.
> > > 
> > > In case of direct issue, basically blk_mq_get_driver_tag() is run after
> > > the request is allocated, which is why I mentioned that the chance of
> > > migration is very small.
> > 
> > "very small" does not cut it, it has to be zero.  And it seems the
> > new version still has this hack.
> 
> But smp_mb() is used for ordering the WRITE and READ, so it is correct.
> 
> barrier() is enough when process migration doesn't happen.

Without numbers I would just make the smp_mb() unconditional. Your
questionable optimisation trades that for a load of the CPU ID and a
conditional branch, which isn't obviously faster to me. It's also very
difficult to explain to people and relies on a bunch of implicit behaviour
(e.g. racing only with CPU-affine hotplug notifier).

If it turns out that the smp_mb() is worthwhile, then I'd suggest improving
the comment, perhaps to include the litmus test I cooked previously.

Will


