RE: [PATCH RFC v7 10/12] megaraid_sas: switch fusion adapters to MQ

> On Wed, Jul 22, 2020 at 11:00:45AM +0530, Kashyap Desai wrote:
> > > On Tue, Jul 21, 2020 at 12:23:39PM +0530, Kashyap Desai wrote:
> > > > > > >
> > > > > > > Perf top (shared host tag. IOPS = 230K)
> > > > > > >
> > > > > > >  13.98%  [kernel]        [k] sbitmap_any_bit_set
> > > > > > >   6.43%  [kernel]        [k] blk_mq_run_hw_queue
> > > > > >
> > > > > > The blk_mq_run_hw_queue() function takes more CPU; it is
> > > > > > called from "scsi_end_request".
> > > > >
> > > > > The problem could be that nr_hw_queues is increased a lot, so
> > > > > that samples on blk_mq_run_hw_queue() can be observed now.
> > > >
> > > > Yes. That is correct.
> > > >
> > > > >
> > > > > > It looks like blk_mq_hctx_has_pending() handles only the
> > > > > > elevator (scheduler) case, so if the queue has
> > > > > > ioscheduler=none we can skip it. In the case of
> > > > > > scheduler=none, IO is pushed to the hardware queue and
> > > > > > bypasses the software queue.
> > > > > > Based on the above understanding, I added the patch below,
> > > > > > and I can see performance scale back to expectation.
> > > > > >
> > > > > > Ming mentioned that we cannot remove blk_mq_run_hw_queues()
> > > > > > from the IO completion path, otherwise we may see IO hangs.
> > > > > > So I have just modified the completion path, assuming it is
> > > > > > only required for the IO scheduler case.
> > > > > > https://www.spinics.net/lists/linux-block/msg55049.html
> > > > > >
> > > > > > Please review and let me know if this is good, or whether we
> > > > > > have to address it with a proper fix.
> > > > > >
> > > > > > diff --git a/block/blk-mq.c b/block/blk-mq.c
> > > > > > index 1be7ac5a4040..b6a5b41b7fc2 100644
> > > > > > --- a/block/blk-mq.c
> > > > > > +++ b/block/blk-mq.c
> > > > > > @@ -1559,6 +1559,9 @@ void blk_mq_run_hw_queues(struct request_queue *q, bool async)
> > > > > >         struct blk_mq_hw_ctx *hctx;
> > > > > >         int i;
> > > > > >
> > > > > > +       if (!q->elevator)
> > > > > > +               return;
> > > > > > +
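
(For reference, this is blk_mq_hctx_has_pending() as it reads in my 5.8
tree, comments mine - only the last of the three checks is
scheduler-specific:

static bool blk_mq_hctx_has_pending(struct blk_mq_hw_ctx *hctx)
{
	/* requests previously punted to the hctx->dispatch list */
	return !list_empty_careful(&hctx->dispatch) ||
		/* per-cpu sw queues that have pending requests */
		sbitmap_any_bit_set(&hctx->ctx_map) ||
		/* work held by the io scheduler, if one is attached */
		blk_mq_sched_has_work(hctx);
}

so the patch above is betting that, with ioscheduler=none, the first two
sources never need a rerun from the completion path.)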
> > > > >
> > > > > This way shouldn't be correct; blk_mq_run_hw_queues() is still
> > > > > needed for none, because a request may not be dispatched
> > > > > successfully by direct issue.
> > > >
> > > > When the block layer attempts to post a request to the h/w queue
> > > > directly (for ioscheduler=none) and that fails, it calls
> > > > blk_mq_request_bypass_insert(), which will start the h/w queue
> > > > from the submission context. Do we still have an issue if we skip
> > > > running the h/w queue from completion?
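
(To spell out the fallback path I mean - condensed from
blk_mq_try_issue_directly() in my 5.8 tree, hctx locking trimmed,
comment mine:

	ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false, true);
	if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE)
		/* the last "true" asks bypass insert to run the hw queue */
		blk_mq_request_bypass_insert(rq, false, true);
	else if (ret != BLK_STS_OK)
		blk_mq_end_request(rq, ret);

so a failed direct issue already kicks the h/w queue from the submitter.)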
> > >
> > > The thing is that we can't guarantee that direct issue or adding
> > > the request into hctx->dispatch is always done for MQ/none; for
> > > example, a request can still be added to the sw queue from
> > > blk_mq_flush_plug_list() when an mq plug is applied.
> >
> > I see that even blk_mq_sched_insert_requests(), called from
> > blk_mq_flush_plug_list(), makes sure it runs the h/w queue. If all
> > the submission paths that deal with the s/w queue make sure they run
> > the h/w queue, can't we remove blk_mq_run_hw_queues() from
> > scsi_end_request()?
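
(What I am looking at, condensed from blk_mq_sched_insert_requests() in
my 5.8 tree, q_usage_counter refcounting trimmed, comments mine:

	e = hctx->queue->elevator;
	if (e && e->type->ops.insert_requests)
		e->type->ops.insert_requests(hctx, list, false);
	else {
		/* try direct issue first, then fall back to the sw queue */
		if (!hctx->dispatch_busy && !e && !run_queue_async) {
			blk_mq_try_issue_list_directly(hctx, list);
			if (list_empty(list))
				return;
		}
		blk_mq_insert_requests(hctx, ctx, list);
	}

	/* even the sw-queue branch ends by kicking the hw queue */
	blk_mq_run_hw_queue(hctx, run_queue_async);

so the plug flush path also runs the h/w queue after inserting.)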
>
> No, one purpose of blk_mq_run_hw_queues() is to rerun the queue in
> case the dispatch budget runs out in the submission path, and
> sdev->device_busy is shared by all hw queues on this scsi device.
>
> I posted a patch before for avoiding it in scsi_end_request(), but it
> looks like it never landed upstream:
>
> https://lore.kernel.org/linux-block/20191118100640.3673-1-ming.lei@xxxxxxxxxx/

Ming - I think the above patch will fix the performance issue on VD.
I fixed some hunk failures and ported it to the 5.8 kernel, and I am
testing the patch on my setup. If you post a V4, I will use that.

So far it looks good. I have reduced the device queue depth so that I
hit the budget-busy code path frequently (see the sketch below).
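
Why reducing the queue depth works: sdev->device_busy is a single
per-device atomic counter, so with nr_hw_queues > 1 every hw queue draws
from the same budget. A simplified sketch of the check (based on
scsi_dev_queue_ready() in drivers/scsi/scsi_lib.c; the real code also
handles device_blocked, and the function name here is mine):

static bool sdev_budget_sketch(struct scsi_device *sdev)
{
	/* one counter for the whole device, shared by all hw queues */
	if (atomic_inc_return(&sdev->device_busy) > sdev->queue_depth) {
		atomic_dec(&sdev->device_busy);
		return false;	/* out of budget: someone must rerun later */
	}
	return true;
}

Shrinking the queue depth makes the out-of-budget branch fire often,
which is exactly the case where the completion-path rerun matters.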

Kashyap


>
> Thanks,
> Ming


