Re: [PATCH] blk-mq: test QUEUE_FLAG_HCTX_ACTIVE for sbitmap_shared in hctx_may_queue

On Tue, Jan 05, 2021 at 10:04:58AM +0000, John Garry wrote:
> On 05/01/2021 02:20, Ming Lei wrote:
> > On Mon, Jan 04, 2021 at 10:41:36AM +0000, John Garry wrote:
> > > On 27/12/2020 11:34, Ming Lei wrote:
> > > > In the case of blk_mq_is_sbitmap_shared(), we should test QUEUE_FLAG_HCTX_ACTIVE
> > > > against q->queue_flags instead of BLK_MQ_S_TAG_ACTIVE.
> > > > 
> > > > So fix it.
> > > > 
> > > > Cc: John Garry<john.garry@xxxxxxxxxx>
> > > > Cc: Kashyap Desai<kashyap.desai@xxxxxxxxxxxx>
> > > > Fixes: f1b49fdc1c64 ("blk-mq: Record active_queues_shared_sbitmap per tag_set for when using shared sbitmap")
> > > > Signed-off-by: Ming Lei<ming.lei@xxxxxxxxxx>
> > > Reviewed-by: John Garry<john.garry@xxxxxxxxxx>
> > > 
> > > > ---
> > > >    block/blk-mq.h | 2 +-
> > > >    1 file changed, 1 insertion(+), 1 deletion(-)
> > > > 
> > > > diff --git a/block/blk-mq.h b/block/blk-mq.h
> > > > index c1458d9502f1..3616453ca28c 100644
> > > > --- a/block/blk-mq.h
> > > > +++ b/block/blk-mq.h
> > > > @@ -304,7 +304,7 @@ static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx,
> > > >    		struct request_queue *q = hctx->queue;
> > > >    		struct blk_mq_tag_set *set = q->tag_set;
> > > > -		if (!test_bit(BLK_MQ_S_TAG_ACTIVE, &q->queue_flags))
> > > > +		if (!test_bit(QUEUE_FLAG_HCTX_ACTIVE, &q->queue_flags))
> > > I wonder how this ever worked properly, as BLK_MQ_S_TAG_ACTIVE is bit index
> > > 1, and for q->queue_flags that means the QUEUE_FLAG_DYING bit, which I figure
> > > is not normally set.
> > It always returns true, and might just cost a bit more CPU, especially since the
> > tag queue depth of megaraid_sas and hisi_sas_v3 is quite high.
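
To make the collision concrete, here is a small userspace sketch; the flag values
are assumed from my reading of the v5.10-era headers (BLK_MQ_S_TAG_ACTIVE == 1,
which is the same bit index as QUEUE_FLAG_DYING in q->queue_flags, while
QUEUE_FLAG_HCTX_ACTIVE sits at a different, higher bit), so treat the exact numbers
as illustrative:

#include <stdbool.h>
#include <stdio.h>

/* Assumed values, mirroring the v5.10-era headers. */
#define BLK_MQ_S_TAG_ACTIVE     1   /* belongs to hctx->state */
#define QUEUE_FLAG_DYING        1   /* same bit index, but in q->queue_flags */
#define QUEUE_FLAG_HCTX_ACTIVE  28  /* the flag the check was meant to test */

static bool test_bit(int nr, const unsigned long *addr)
{
	return (*addr >> nr) & 1UL;
}

int main(void)
{
	/* A live queue with at least one active hctx. */
	unsigned long queue_flags = 1UL << QUEUE_FLAG_HCTX_ACTIVE;

	/* Buggy check: tests bit 1 of queue_flags, i.e. QUEUE_FLAG_DYING, which is clear. */
	if (!test_bit(BLK_MQ_S_TAG_ACTIVE, &queue_flags))
		printf("buggy check bails out early: fair tag sharing is skipped\n");

	/* Fixed check: sees QUEUE_FLAG_HCTX_ACTIVE set and falls through to the throttle. */
	if (test_bit(QUEUE_FLAG_HCTX_ACTIVE, &queue_flags))
		printf("fixed check proceeds to divide the shared tag space\n");

	return 0;
}

So on a live queue the buggy test bails out before the per-queue depth calculation,
and fair sharing of the shared sbitmap never kicks in, as noted above.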
> 
> Hi Ming,
> 
> Right, but we actually tested by hacking the host tag queue depth to be
> lower, such that we should see tag contention. Here is an extract from the
> original series cover letter with my results:
> 
> Tag depth 		4000 (default)		260**
> 
> Baseline (v5.9-rc1):
> none sched:		2094K IOPS		513K
> mq-deadline sched:	2145K IOPS		1336K
> 
> Final, host_tagset=0 in LLDD *, ***:
> none sched:		2120K IOPS		550K
> mq-deadline sched:	2121K IOPS		1309K
> 
> Final ***:
> none sched:		2132K IOPS		1185		
> mq-deadline sched:	2145K IOPS		2097	
> 
> Maybe my test did not expose the issue. Kashyap also tested this and
> reported the original issue that motivated this feature, so I'm
> confused.

How many LUNs were involved in the above test with 260 depth?
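
For context, the fair-share throttle in hctx_may_queue() caps each active queue at
roughly total_depth / active_users tags, with a small floor (the exact constant is
from my recollection of the code, so treat it as approximate). With a 260-tag depth
the cap only becomes restrictive once several LUNs are active, which is why the LUN
count matters. A rough sketch of the arithmetic:

#include <stdio.h>

/*
 * Rough model of the fair-share cap: each active queue gets about
 * total_depth / active_users tags, with an assumed floor of 4.
 * The LUN counts below are illustrative, not from the thread.
 */
static unsigned int fair_share(unsigned int total_depth, unsigned int users)
{
	unsigned int depth = (total_depth + users - 1) / users;

	return depth > 4 ? depth : 4;
}

int main(void)
{
	unsigned int luns;

	for (luns = 1; luns <= 16; luns *= 2)
		printf("depth 260, %2u active LUNs -> per-queue cap %u\n",
		       luns, fair_share(260, luns));
	return 0;
}

With only one or two busy LUNs the cap barely differs from the full 260, so the
missing check might not show up in the IOPS numbers at all.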


Thanks,
Ming



