Re: [PATCH RFC v7 10/12] megaraid_sas: switch fusion adapters to MQ

On Wed, Jul 29, 2020 at 10:55:22AM +0530, Kashyap Desai wrote:
> > On Tue, Jul 28, 2020 at 08:54:27AM +0100, John Garry wrote:
> > > On 24/07/2020 03:47, Ming Lei wrote:
> > > > On Thu, Jul 23, 2020 at 06:29:01PM +0100, John Garry wrote:
> > > > > > > As I see, since megaraid will have a 1:1 mapping of CPU to hw
> > > > > > > queue, will there only ever possibly be a single bit set in
> > > > > > > ctx_map? If so, it seems a waste to always check every sbitmap
> > > > > > > map. But adding logic for this may negate any possible gains.
> > > > > >
> > > > > > It really depends on the min and max cpu id in the map; the
> > > > > > sbitmap depth can then be reduced to (max - min + 1). I'd suggest
> > > > > > double-checking that the cost of sbitmap_any_bit_set() really
> > > > > > matters.
> > > > >
> > > > > Hi Ming,
> > > > >
> > > > > I'm not sure that reducing the search range would help much, as we
> > > > > still need to load some indexes of map[], and at best this may be
> > > > > reduced from 2/3 -> 1 elements, depending on nr_cpus.
> > > >
> > > > I believe you misunderstood my idea; you have to think about it from
> > > > an implementation viewpoint.
> > > >
> > > > The only workable way is to store the min cpu id as 'offset' and set
> > > > the sbitmap depth as (max - min + 1), isn't it? Then the actual cpu
> > > > id can be figured out via 'offset' + nr_bit, and the whole indexes
> > > > are just spread over the actual depth. BTW, max & min are the max /
> > > > min cpu ids in hctx->cpu_map. So we can improve not only the 1:1
> > > > case; I guess most MQ cases can benefit from the change, since it
> > > > shouldn't be usual for one ctx_map to cover both 0 and nr_cpu_id - 1.
> > > >
> > > > Meantime, we need to allocate the sbitmap dynamically.
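
A minimal sketch of the offset idea above, for illustration only
('ctx_off' is a hypothetical field, and the dynamic allocation part is
omitted):

	/* Size ctx_map to cover just [min, max] of hctx->cpumask and
	 * remember the min cpu id as an offset.
	 */
	unsigned int min = cpumask_first(hctx->cpumask);
	unsigned int max = cpumask_last(hctx->cpumask);

	hctx->ctx_off = min;		/* hypothetical field */
	sbitmap_resize(&hctx->ctx_map, max - min + 1);

	/* The actual cpu id behind a set bit 'nr' is then: */
	unsigned int cpu = hctx->ctx_off + nr;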
> > >
> > > OK, so dynamically allocating the sbitmap could be good. I was
> > > thinking previously that we still allocate for nr_cpus size, and
> > > search a limited range - but this would have heavier runtime overhead.
> > >
> > > So if you really think that this may have some value, then let me
> > > know, so we can look to take it forward.
> >
> > Forgot to mention: the in-tree code has been in this shape for a long
> > time; please see sbitmap_resize() called from blk_mq_map_swqueue().
> >
> > Another update is that V4 of 'scsi: core: only re-run queue in
> > scsi_end_request() if device queue is busy' is quite hard to implement
> > since commit b4fd63f42647110c9 ("Revert "scsi: core: run queue if SCSI
> > device queue isn't ready and queue is idle").
> 
> Ming -
> 
> Update from my testing: I found only one case of IO stall. I can discuss
> this specific topic separately if you would like me to send a separate
> patch; there is too much interleaved discussion in this thread.
> 
> I noted you mentioned that V4 of 'scsi: core: only re-run queue in
> scsi_end_request() if device queue is busy' needs the underlying support
> of the "scsi: core: run queue if SCSI device queue isn't ready and queue
> is idle" patch, which has already been reverted in mainline.

Right.

> The overall idea of running h/w queues conditionally in your patch "scsi:
> core: only re-run queue in scsi_end_request" is still worthwhile. There can be

I agree.

> some race if we use this patch, and that is your concern. Am I correct?

If the patch "scsi: core: run queue if SCSI device queue isn't ready and
queue is idle" is re-added, the approach should work. However, it looks a
bit complicated, and I was wondering whether a simpler approach could be
figured out.

> 
> One of the races I found in my testing is fixed by the patch below:
> 
> diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
> index 54f9015..bcfd33a 100644
> --- a/block/blk-mq-sched.c
> +++ b/block/blk-mq-sched.c
> @@ -173,8 +173,10 @@ static int blk_mq_do_dispatch_ctx(struct blk_mq_hw_ctx *hctx)
>                 if (!sbitmap_any_bit_set(&hctx->ctx_map))
>                         break;
> 
> -               if (!blk_mq_get_dispatch_budget(hctx))
> +               if (!blk_mq_get_dispatch_budget(hctx)) {
> +                       blk_mq_delay_run_hw_queue(hctx, BLK_MQ_BUDGET_DELAY);
>                         break;
> +               }

Actually all hw queues need to be run, instead of just this hctx, because
the budget is request-queue wide.
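
Something like the below, as a sketch only (blk_mq_delay_run_hw_queues()
runs every hw queue of the request queue):

	if (!blk_mq_get_dispatch_budget(hctx)) {
		/* budget is per request queue, so kick all hw queues */
		blk_mq_delay_run_hw_queues(hctx->queue, BLK_MQ_BUDGET_DELAY);
		break;
	}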

> 
>                 rq = blk_mq_dequeue_from_ctx(hctx, ctx);
>                 if (!rq) {
> 
> 
> In my test setup, I have your V3 'scsi: core: only re-run queue in
> scsi_end_request() if device queue is busy' rebased on 5.8, which does
> not have "scsi: core: run queue if SCSI device queue isn't ready and
> queue is idle" since it is already reverted in mainline.

If you added the above patch, I believe you can remove the queue run in
scsi_end_request() unconditionally. However, the delayed queue run may
degrade IO performance.

Actually the queue re-run in scsi_end_request() is only for dequeuing
requests from the sw/scheduler queues; there is no such issue if the
request stays in the hctx->dispatch list.
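
As an illustration only, such a conditional re-run could look like the
below (assuming sdev->device_busy counts in-flight commands; this is a
sketch, not the actual patch):

	/* Only re-run when the device queue was saturated, i.e. when a
	 * request may be parked in a sw/scheduler queue waiting for
	 * budget.
	 */
	if (atomic_read(&sdev->device_busy) >= sdev->queue_depth - 1)
		blk_mq_run_hw_queues(sdev->request_queue, true);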

> 
> I have 24 SAS SSDs and I reduced QD to 16 so that I hit budget contention
> frequently. I am running ioscheduler=none.
> If hctx0 has 16 IOs inflight (all those IOs will be posted to the h/w
> queue directly), the next IO (17th IO) will see budget contention and
> will be queued into the s/w queue.
> It is expected that the queue will be kicked from scsi_end_request(). It
> is possible that one of the IOs completed and reduced sdev->device_busy,
> but has not yet reached the code which kicks the h/w queue. Releasing the
> budget and restarting the h/w queue is not atomic. At the same time,
> another IO (18th IO) from the submission path gets the budget and will be
> posted from the path below. This IO will reset sdev->restart and will not
> allow the h/w queue to be restarted from the completion path. This will
> lead to an IO stall.

Maybe the queue needs to be re-run before resetting sdev->restart, if
sdev->restart is 1.
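
Roughly like the below, as an untested sketch ('sdev->restart' is the
flag from the patch under discussion, not a mainline field):

	/* Submission path: re-run the hw queues before clearing the
	 * restart flag, so the completion side's restart request is
	 * not lost.
	 */
	if (atomic_read(&sdev->restart)) {
		blk_mq_run_hw_queues(sdev->request_queue, true);
		atomic_set(&sdev->restart, 0);
	}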


Thanks, 
Ming
