Re: [PATCH RFC v7 10/12] megaraid_sas: switch fusion adapters to MQ

On Tue, Jul 21, 2020 at 12:23:39PM +0530, Kashyap Desai wrote:
> > > >
> > > > Perf top (shared host tag. IOPS = 230K)
> > > >
> > > > 13.98%  [kernel]        [k] sbitmap_any_bit_set
> > > >      6.43%  [kernel]        [k] blk_mq_run_hw_queue
> > >
> > > The blk_mq_run_hw_queue() function takes more CPU; it is called from
> > > scsi_end_request().
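
For reference, the completion path in question looks roughly like this (a
simplified sketch of the scsi_end_request() tail for kernels of this era,
with unrelated details elided; not a verbatim copy):

	/* drivers/scsi/scsi_lib.c, simplified sketch */
	static bool scsi_end_request(struct request *req, blk_status_t error,
				     unsigned int bytes)
	{
		struct scsi_device *sdev = req->q->queuedata;
		struct request_queue *q = sdev->request_queue;

		...
		__blk_mq_end_request(req, error);

		/*
		 * Kick the queues so that requests waiting for budget or a
		 * tag get another chance; blk_mq_run_hw_queues() walks every
		 * hctx, so a large nr_hw_queues makes this loop expensive.
		 */
		if (scsi_target(sdev)->single_lun ||
		    !list_empty(&sdev->host->starved_list))
			kblockd_schedule_work(&sdev->requeue_work);
		else
			blk_mq_run_hw_queues(q, true);
		...
	}
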
> >
> > The problem could be that nr_hw_queues is increased a lot, so that
> > samples in blk_mq_run_hw_queue() can be observed now.
> 
> Yes. That is correct.
> 
> >
> > > It looks like blk_mq_hctx_has_pending() handles only the elevator
> > > (scheduler) case. If the queue has ioscheduler=none, we can skip it. In
> > > case of scheduler=none, IO will be pushed to the hardware queue and it
> > > bypasses the software queue.
> > > Based on the above understanding, I added the patch below, and I can
> > > see performance scale back to expectation.
> > >
> > > Ming mentioned that we cannot remove blk_mq_run_hw_queues() from the IO
> > > completion path, otherwise we may see IO hangs. So I have just modified
> > > the completion path, assuming it is only required for the IO scheduler
> > > case.
> > > https://www.spinics.net/lists/linux-block/msg55049.html
> > >
> > > Please review and let me know if this is good, or whether we have to
> > > address it with a proper fix.
> > >
> > > diff --git a/block/blk-mq.c b/block/blk-mq.c
> > > index 1be7ac5a4040..b6a5b41b7fc2 100644
> > > --- a/block/blk-mq.c
> > > +++ b/block/blk-mq.c
> > > @@ -1559,6 +1559,9 @@ void blk_mq_run_hw_queues(struct request_queue *q, bool async)
> > >         struct blk_mq_hw_ctx *hctx;
> > >         int i;
> > >
> > > +       if (!q->elevator)
> > > +               return;
> > > +
> >
> > This shouldn't be correct; blk_mq_run_hw_queues() is still needed for
> > none because the request may not be dispatched successfully by direct
> > issue.
> 
> When the block layer attempts to post a request to the h/w queue directly
> (for ioscheduler=none) and that fails, it calls
> blk_mq_request_bypass_insert(). blk_mq_request_bypass_insert() will start
> the h/w queue from the submission context. Do we still have an issue if we
> skip running the hw queue from completion?
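
The fallback referred to here is roughly the following (a simplified sketch
of __blk_mq_try_issue_directly() for this era; locking, stopped/quiesced
checks and error handling are elided):

	/* block/blk-mq.c, simplified sketch */
	static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
							struct request *rq,
							blk_qc_t *cookie,
							bool bypass_insert,
							bool last)
	{
		struct request_queue *q = rq->q;
		bool run_queue = true;

		if (q->elevator && !bypass_insert)
			goto insert;

		if (!blk_mq_get_dispatch_budget(hctx))
			goto insert;

		if (!blk_mq_get_driver_tag(rq)) {
			blk_mq_put_dispatch_budget(hctx);
			goto insert;
		}

		return __blk_mq_issue_directly(hctx, rq, cookie, last);
	insert:
		if (bypass_insert)
			return BLK_STS_RESOURCE;

		/* park the request on hctx->dispatch and run the hw queue */
		blk_mq_request_bypass_insert(rq, false, run_queue);
		return BLK_STS_OK;
	}

So for failures that reach this path, the hw queue is indeed run from the
submission context; the question is whether every insertion path for none
behaves this way.
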

The thing is that we can't guarantee that direct issue or adding the request
to hctx->dispatch is always done for MQ/none; for example, a request can
still be added to the sw queue from blk_mq_flush_plug_list() when an mq plug
is applied.
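
For instance, the plugging path ends up in something like this (a simplified
sketch of blk_mq_sched_insert_requests() for this era; the direct-issue
shortcut and other details are elided):

	/* block/blk-mq-sched.c, simplified sketch */
	void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
					  struct blk_mq_ctx *ctx,
					  struct list_head *list,
					  bool run_queue_async)
	{
		struct elevator_queue *e = hctx->queue->elevator;

		if (e && e->type->ops.insert_requests) {
			e->type->ops.insert_requests(hctx, list, false);
		} else {
			...
			/*
			 * none: requests land in the per-cpu sw queue
			 * (ctx->rq_lists), not on hctx->dispatch, so only
			 * a later run of the hw queue dispatches them.
			 */
			blk_mq_insert_requests(hctx, ctx, list);
		}

		blk_mq_run_hw_queue(hctx, run_queue_async);
	}

If that run of the hw queue cannot make progress at that moment (no budget
or no driver tag, say), the requests stay queued, and the run from the
completion path is one of the things that eventually picks them up.
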

Also, I am not sure it is a good idea to add the request to hctx->dispatch
via blk_mq_request_bypass_insert() in __blk_mq_try_issue_directly() when we
run out of budget, because doing so may hurt sequential IO performance.

Thanks,
Ming



