Re: BUG at IP: blk_mq_get_request+0x23e/0x390 on 4.16.0-rc7

On 04/08/2018 03:57 PM, Ming Lei wrote:
On Sun, Apr 08, 2018 at 02:53:03PM +0300, Sagi Grimberg wrote:

Hi Sagi

I can still reproduce this issue with the change:

Thanks for validating, Yi,

Would it be possible to test the following:
--
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 75336848f7a7..81ced3096433 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -444,6 +444,10 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
 		return ERR_PTR(-EXDEV);
 	}
 	cpu = cpumask_first_and(alloc_data.hctx->cpumask, cpu_online_mask);
+	if (cpu >= nr_cpu_ids) {
+		pr_warn("no online cpu for hctx %d\n", hctx_idx);
+		cpu = cpumask_first(alloc_data.hctx->cpumask);
+	}
 	alloc_data.ctx = __blk_mq_get_ctx(q, cpu);
 
 	rq = blk_mq_get_request(q, NULL, op, &alloc_data);
--
...


[  153.384977] BUG: unable to handle kernel paging request at 00003a9ed053bd48
[  153.393197] IP: blk_mq_get_request+0x23e/0x390

Also would it be possible to provide gdb output of:

l *(blk_mq_get_request+0x23e)
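
For reference, assuming the vmlinux was built with debug info
(CONFIG_DEBUG_INFO), something like:

  gdb vmlinux
  (gdb) l *(blk_mq_get_request+0x23e)

should print the corresponding source lines.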

nvmf_connect_io_queue() is used this way: it asks blk-mq to allocate a
request from one specific hw queue, but that hw queue may have no online
CPU mapped to it.
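
Roughly, the allocation path being discussed looks like the sketch below
(simplified, with an illustrative helper name, not the exact fabrics code):
the connect command for I/O queue 'qid' has to be issued on that same hw
queue, so the tag is taken from one specific hctx instead of the hctx
mapped to the submitting CPU.

	/* Simplified sketch only, not the actual nvme-fabrics code. */
	static struct request *connect_rq(struct request_queue *q,
					  unsigned int qid)
	{
		/* ask blk-mq for a tag on hw queue 'qid - 1' directly */
		return blk_mq_alloc_request_hctx(q, REQ_OP_DRV_OUT,
				BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_RESERVED,
				qid - 1);
	}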

Yes, this is what I suspect..

And the following patchset makes this kind of allocation fail cleanly and
avoids the kernel oops.

	https://marc.info/?l=linux-block&m=152318091025252&w=2

Thanks, Ming,

But I don't want to fail the allocation; nvmf_connect_io_queue simply
needs a tag to issue the connect request. I would much rather take this
tag from an online cpu than fail it... We use this because we reserve

The failure is only triggered when there isn't any online CPU mapped to
this hctx, so do you want to wait for one of this hctx's CPUs to become
online?

I was thinking of allocating a tag from that hctx even if it has no
online cpu; the execution is done on an online cpu (hence the call
to blk_mq_alloc_request_hctx).

That can be done, but it doesn't follow the current blk-mq rule, because
blk-mq requires the request to be dispatched on a CPU mapped to this hctx.

Could you explain a bit why you want to do it this way?

My device exposes nr_hw_queues, which is not higher than num_online_cpus,
so I want to connect all hctxs in the hope that they will be used.

I agree we don't want to connect an hctx which doesn't have an online
cpu, that's redundant, but this is not the case here.

Or I may have understood you wrong :-)

In the report we connected 40 hctxs (which was exactly the number of
online cpus); after Yi removed 3 cpus, we tried to connect 37 hctxs.
I'm not sure why some hctxs are left without any online cpus.

That is possible after the following two commits:

4b855ad37194 ("blk-mq: Create hctx for each present CPU")
20e4d8139319 ("blk-mq: simplify queue mapping & schedule with each possisble CPU")

And this can be triggered even without putting down any CPUs.

The blk-mq CPU hotplug handler was removed in 4b855ad37194, and we can't
remap queues any more when the CPU topology changes, so the static & fixed
mapping has to be set up from the beginning.

Then if there are fewer online CPUs than hw queues, some hctxs can end up
mapped only to offline CPUs. For example, if a device has 4 hw queues but
there are only 2 online CPUs and 6 offline CPUs, at most 2 hw queues are
assigned online CPUs, and the other two are left with only offline CPUs.
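
For illustration, the "all mapped CPUs are offline" condition on a single
hctx can be expressed with the standard cpumask helpers (a minimal sketch,
not a proposed patch):

	/* sketch: true when every CPU mapped to this hctx is offline */
	if (!cpumask_intersects(hctx->cpumask, cpu_online_mask))
		pr_debug("hctx %u has no online CPU mapped\n", hctx->queue_num);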

That is fine, but the problem in the example I give below is that
nr_hw_queues == num_online_cpus, yet because of the mapping we still
end up with unmapped hctxs.

Let's say I have a 4-cpu system and my device always allocates
num_online_cpus() hctxs.

at first I get:
cpu0 -> hctx0
cpu1 -> hctx1
cpu2 -> hctx2
cpu3 -> hctx3

When cpu1 goes offline I think the new mapping will be:
cpu0 -> hctx0
cpu1 -> hctx0 (from cpu_to_queue_index) // offline
cpu2 -> hctx2
cpu3 -> hctx0 (from cpu_to_queue_index)

This means that hctx1 is now unmapped. I guess we can fix the nvmf code
to not connect it, but we end up with fewer queues than cpus without
any good reason.

Optimally, I would want a different mapping that uses all
the queues:
cpu0 -> hctx0
cpu2 -> hctx1
cpu3 -> hctx2
* cpu1 -> hctx1 (doesn't matter, offline)

Something looks broken...

No, it isn't broken.

Maybe broken is the wrong phrase, but it's suboptimal...

Storage is a client/server model; the hw queue should only be active if
there are requests coming from the client (CPU),

Correct.

and the hw queue becomes inactive if no online CPU is mapped to it.

But when we reset the controller, we call blk_mq_update_nr_hw_queues()
with the current nr_hw_queues, which never exceeds num_online_cpus.
This, in turn, remaps mq_map, which results in unmapped queues because
of the mapping function, not because we have more hctxs than online
cpus...

An easy fix is to allocate num_present_cpus queues and only connect
the online ones, but as you said, we have unused resources this way.
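
A minimal sketch of the "only connect the online ones" idea, assuming the
standard blk-mq iterator and the existing nvmf_connect_io_queue(); 'ctrl'
is illustrative and error handling is omitted, so this is not actual
nvme-rdma code:

	struct blk_mq_hw_ctx *hctx;
	int i, ret;

	queue_for_each_hw_ctx(ctrl->ctrl.connect_q, hctx, i) {
		/* skip hw queues whose mapped CPUs are all offline */
		if (!cpumask_intersects(hctx->cpumask, cpu_online_mask))
			continue;
		ret = nvmf_connect_io_queue(&ctrl->ctrl, i + 1);
		if (ret)
			break;
	}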

We also have an issue with blk_mq_rdma_map_queues with the only device
that supports it, because it doesn't use managed affinity (the code was
reverted) and can have its irq affinity redirected in case of cpu
offlining...

The goal here, I think, should be to allocate just enough queues (not
more than the number of online cpus), spread them 1:1 across online cpus,
and also make sure to allocate completion vectors that align with online
cpus. I just need to figure out how to do that...
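
As a minimal sketch of the capping part (illustrative names only; the 1:1
spreading and the completion-vector alignment are the open questions):

	/* sketch: never ask for more I/O queues than there are online CPUs */
	unsigned int nr_io_queues = min_t(unsigned int, opts->nr_io_queues,
					  num_online_cpus());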


