Re: [PATCH RFC v7 10/12] megaraid_sas: switch fusion adapters to MQ

On Mon, Jul 20, 2020 at 12:53:55PM +0530, Kashyap Desai wrote:
> > > > I also noticed nr_hw_queues is now exposed in sysfs -
> > > >
> > > > /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/0000:87:04.0/0000:8b:00.0/0000:8c:00.0/0000:8d:00.0/host14/scsi_host/host14/nr_hw_queues:128
> > >
> > > That's on my v8 wip branch, so I guess you're picking it up from there.
> >
> > John - I did more testing on the v8 wip branch. CPU hotplug is working as
> > expected, but I still see a performance issue on Logical Volumes.
> >
> > I created an 8-drive RAID-0 VD on the MR controller, and below is the
> > performance impact of this RFC. It looks like the contention is on a single
> > <sdev>.
> >
> > I used the command: "numactl -N 1 fio 1vd.fio --iodepth=128 --bs=4k
> > --rw=randread --cpus_allowed_policy=split --ioscheduler=none
> > --group_reporting --runtime=200 --numjobs=1"
> > IOPS without RFC = 300K; IOPS with RFC = 230K.
> >
> > Perf top (shared host tag. IOPS = 230K)
> >
> >    13.98%  [kernel]        [k] sbitmap_any_bit_set
> >     6.43%  [kernel]        [k] blk_mq_run_hw_queue
> 
> The blk_mq_run_hw_queue() function, which is called from scsi_end_request(),
> takes more CPU.

The problem could be that nr_hw_queues is increased a lot, so that samples in
blk_mq_run_hw_queue() can be observed now.
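
For reference, blk_mq_run_hw_queues() iterates over every hardware queue of
the request_queue, so the work done on each completion grows with
nr_hw_queues. Below is a sketch of the function as it looks around v5.8;
exact details may differ between kernel versions:

void blk_mq_run_hw_queues(struct request_queue *q, bool async)
{
	struct blk_mq_hw_ctx *hctx;
	int i;

	/* Walk all hardware queues; each blk_mq_run_hw_queue() call first
	 * checks whether the hctx has pending work, which is where the
	 * sbitmap_any_bit_set() samples in the profile above come from. */
	queue_for_each_hw_ctx(q, hctx, i) {
		if (blk_mq_hctx_stopped(hctx))
			continue;

		blk_mq_run_hw_queue(hctx, async);
	}
}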

> It looks like blk_mq_hctx_has_pending() handles only the elevator (scheduler)
> case. If the queue has ioscheduler=none, we can skip it. In case of
> scheduler=none, IO is pushed to the hardware queue and bypasses the software
> queue. Based on the above understanding, I added the patch below, and I can
> see performance scale back to expectation.
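
For context, here is approximately what blk_mq_hctx_has_pending() checks in
kernels of this vintage (~v5.8); the sbitmap_any_bit_set() call on
hctx->ctx_map is the top entry in the perf profile above, and note that the
function also looks at the dispatch list and the software-queue bitmap, not
only at scheduler work:

static bool blk_mq_hctx_has_pending(struct blk_mq_hw_ctx *hctx)
{
	/* requests parked on the hctx dispatch list (e.g. after a failed
	 * direct issue), pending software-queue bits, or scheduler work */
	return !list_empty_careful(&hctx->dispatch) ||
		sbitmap_any_bit_set(&hctx->ctx_map) ||
			blk_mq_sched_has_work(hctx);
}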
> 
> Ming mentioned that we cannot remove blk_mq_run_hw_queues() from the IO
> completion path, otherwise we may see an IO hang. So I have just modified the
> completion path, assuming it is only required for the IO scheduler case.
> https://www.spinics.net/lists/linux-block/msg55049.html
> 
> Please review and let me know if this is good, or whether we have to address
> it with a proper fix.
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 1be7ac5a4040..b6a5b41b7fc2 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1559,6 +1559,9 @@ void blk_mq_run_hw_queues(struct request_queue *q, bool async)
>         struct blk_mq_hw_ctx *hctx;
>         int i;
> 
> +       if (!q->elevator)
> +               return;
> +

This way shouldn't be correct: blk_mq_run_hw_queues() is still needed for
none, because the request may not be dispatched successfully by direct issue.
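
To illustrate the hang Ming is pointing at, here is a self-contained toy
model (plain user-space C, not kernel code; all names are made up for the
illustration): with ioscheduler=none, a request whose direct issue fails is
parked on the hctx dispatch list, and if the completion path no longer
re-runs the hardware queues, nothing ever retries it.

#include <stdbool.h>
#include <stdio.h>

struct toy_queue {
	bool has_elevator;	/* ioscheduler=none -> false */
	int  parked;		/* requests sitting on hctx->dispatch */
	int  device_credits;	/* free driver tags / budget */
};

/* Direct issue succeeds only if the device has a free credit;
 * otherwise the request is parked and waits for a queue re-run. */
static bool direct_issue(struct toy_queue *q)
{
	if (q->device_credits > 0) {
		q->device_credits--;
		return true;
	}
	q->parked++;
	return false;
}

/* Completion path with the proposed patch applied: the re-run is
 * skipped when there is no elevator, so parked requests are only
 * retried in the scheduler case. */
static void complete_one(struct toy_queue *q)
{
	q->device_credits++;
	if (!q->has_elevator)
		return;
	if (q->parked > 0 && q->device_credits > 0) {
		q->parked--;
		q->device_credits--;
	}
}

int main(void)
{
	struct toy_queue q = { .has_elevator = false, .device_credits = 1 };

	direct_issue(&q);	/* consumes the only credit */
	direct_issue(&q);	/* fails, request gets parked */
	complete_one(&q);	/* credit returns, but the queue is never re-run */

	printf("requests left parked forever: %d\n", q.parked);
	return 0;
}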


Thanks,
Ming



