Re: [PATCH 2/2] nvme-multipath: don't block on blk_queue_enter of the underlying device

On 2021/3/23 11:23, Sagi Grimberg wrote:


On 3/22/21 7:57 PM, Sagi Grimberg wrote:

When we reset/teardown a controller, we must freeze and quiesce the
namespaces request queues to make sure that we safely stop inflight I/O
submissions. Freeze is mandatory because if our hctx map changed between
reconnects, blk_mq_update_nr_hw_queues will immediately attempt to freeze
the queue, and if it still has pending submissions (that are still
quiesced) it will hang.
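
Roughly, the teardown ordering being described is (simplified sketch, not
the exact driver code; the transport-specific steps are elided):
--
/*
 * Simplified sketch of the teardown ordering described above; the real
 * fabrics drivers interleave transport-specific steps.
 */
static void teardown_io_queues_sketch(struct nvme_ctrl *ctrl)
{
	nvme_start_freeze(ctrl);  /* new submissions wait in blk_queue_enter() */
	nvme_stop_queues(ctrl);   /* quiesce: stop dispatching to the hw */
	/* ... stop transport queues, cancel/complete inflight requests ... */
	/*
	 * The queues stay frozen until we reconnect, so that
	 * blk_mq_update_nr_hw_queues() can remap hctxs safely before
	 * nvme_unfreeze()/nvme_start_queues() let I/O flow again.
	 */
}
--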

However, by freezing the namespaces request queues, and only unfreezing
them when we successfully reconnect, inflight submissions that are
running concurrently can now block while holding the nshead srcu until
either we successfully reconnect or ctrl_loss_tmo expires (or the user
explicitly disconnects).

This caused a deadlock when a different controller (different path on the
same subsystem) became live (i.e. optimized/non-optimized). This is
because nvme_mpath_set_live needs to synchronize the nshead srcu before
requeueing I/O in order to make sure that current_path is visible to
future (re-)submissions. However, the srcu lock is taken by a blocked
submission on a frozen request queue, and we have a deadlock.
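
To make the two sides of that deadlock concrete, a heavily simplified
sketch (illustrative only, not the exact upstream functions):
--
/* I/O side: head->srcu is held across path selection and submission. */
static blk_qc_t nvme_ns_head_submit_bio_sketch(struct bio *bio)
{
	struct nvme_ns_head *head = bio->bi_bdev->bd_disk->private_data;
	int srcu_idx = srcu_read_lock(&head->srcu);
	struct nvme_ns *ns = nvme_find_path(head);

	if (likely(ns)) {
		bio_set_dev(bio, ns->disk->part0);
		/* per the report above: ends up stuck on the frozen ns
		 * queue while head->srcu is still held */
		submit_bio_noacct(bio);
	}
	srcu_read_unlock(&head->srcu, srcu_idx);
	return BLK_QC_T_NONE;
}

/* Path-update side: must drain all srcu readers before requeueing. */
static void nvme_mpath_set_live_sketch(struct nvme_ns *ns)
{
	/* ... mark the path live ... */
	synchronize_srcu(&ns->head->srcu);	/* waits on the reader above */
	kblockd_schedule_work(&ns->head->requeue_work);
}
--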

In order to fix this use the blk_mq_submit_bio_direct API to submit the
bio to the low-level driver, which does not block on the queue freeze
but instead allows nvme-multipath to pick another path or queue up the
bio.
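
Sketch of what that would look like inside nvme_ns_head_submit_bio (the
exact blk_mq_submit_bio_direct() signature from patch 1/2 is assumed
here, with ret being the blk_qc_t cookie):
--
	if (likely(ns)) {
		bio_set_dev(bio, ns->disk->part0);
		/* assumed: returns false instead of sleeping when the
		 * underlying queue cannot be entered (e.g. frozen) */
		if (!blk_mq_submit_bio_direct(bio, &ret)) {
			/* let nvme-multipath requeue and retry on another
			 * path, or once a path becomes live again */
			spin_lock_irq(&head->requeue_lock);
			bio_list_add(&head->requeue_list, bio);
			spin_unlock_irq(&head->requeue_lock);
		}
	}
--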

Almost...

This still has the same issue, but instead of blocking on
blk_queue_enter() it blocks on blk_mq_get_tag():
--
  __schedule+0x22b/0x6e0
  schedule+0x46/0xb0
  io_schedule+0x42/0x70
  blk_mq_get_tag+0x11d/0x270
  ? blk_bio_segment_split+0x235/0x2a0
  ? finish_wait+0x80/0x80
  __blk_mq_alloc_request+0x65/0xe0
  blk_mq_submit_bio+0x144/0x500
  blk_mq_submit_bio_direct+0x78/0xa0
  nvme_ns_head_submit_bio+0xc3/0x2f0 [nvme_core]
  __submit_bio_noacct+0xcf/0x2e0
  __blkdev_direct_IO+0x413/0x440
  ? __io_complete_rw.constprop.0+0x150/0x150
  generic_file_read_iter+0x92/0x160
  io_iter_do_read+0x1a/0x40
  io_read+0xc5/0x350
  ? common_interrupt+0x14/0xa0
  ? update_load_avg+0x7a/0x5e0
  io_issue_sqe+0xa28/0x1020
  ? lock_timer_base+0x61/0x80
  io_wq_submit_work+0xaa/0x120
  io_worker_handle_work+0x121/0x330
  io_wqe_worker+0xb6/0x190
  ? io_worker_handle_work+0x330/0x330
  ret_from_fork+0x22/0x30
--

--
  ? usleep_range+0x80/0x80
  __schedule+0x22b/0x6e0
  ? usleep_range+0x80/0x80
  schedule+0x46/0xb0
  schedule_timeout+0xff/0x140
  ? del_timer_sync+0x67/0xb0
  ? __prepare_to_swait+0x4b/0x70
  __wait_for_common+0xb3/0x160
  __synchronize_srcu.part.0+0x75/0xe0
  ? __bpf_trace_rcu_utilization+0x10/0x10
  nvme_mpath_set_live+0x61/0x130 [nvme_core]
  nvme_update_ana_state+0xd7/0x100 [nvme_core]
  nvme_parse_ana_log+0xa5/0x160 [nvme_core]
  ? nvme_mpath_set_live+0x130/0x130 [nvme_core]
  nvme_read_ana_log+0x7b/0xe0 [nvme_core]
  process_one_work+0x1e6/0x380
  worker_thread+0x49/0x300
--



If I were to always start the queues in nvme_tcp_teardown_io_queues
right after I cancel the tagset inflights, like:
--
@@ -1934,8 +1934,7 @@ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
         nvme_sync_io_queues(ctrl);
         nvme_tcp_stop_io_queues(ctrl);
         nvme_cancel_tagset(ctrl);
-       if (remove)
-               nvme_start_queues(ctrl);
+       nvme_start_queues(ctrl);
         nvme_tcp_destroy_io_queues(ctrl, remove);
--

then a simple reset during traffic bricks the host in an infinite loop,
because in the setup sequence we freeze the queue in
nvme_update_ns_info, so the queue is frozen but we still have an
available path (because the controller is live again!), so nvme-mpath
keeps calling blk_mq_submit_bio_direct and failing, and
nvme_update_ns_info cannot properly freeze the queue..
-> deadlock.
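
For reference, the freeze in question happens roughly here (simplified
from nvme_update_ns_info):
--
	/* nvme_update_ns_info() during (re)connect setup, simplified: */
	blk_mq_freeze_queue(ns->disk->queue);	/* waits for the ns queue to drain */
	/* ... update lba_shift, queue limits, disk info ... */
	blk_mq_unfreeze_queue(ns->disk->queue);
--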

So this is obviously incorrect.

Also, if we make nvme-mpath submit bios with REQ_NOWAIT, we will
basically fail as soon as we run out of tags, even on the normal path...
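
i.e. something along these lines (sketch):
--
	/* mark multipath bios as non-blocking before handing them down */
	bio->bi_opf |= REQ_NOWAIT;
	submit_bio_noacct(bio);
	/*
	 * With REQ_NOWAIT the tag allocation no longer sleeps: once the
	 * tagset is exhausted the bio is completed with BLK_STS_AGAIN
	 * (bio_wouldblock_error()), i.e. I/O starts failing under load
	 * even when the path is perfectly healthy.
	 */
--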

So I'm not exactly sure what we should do to fix this...

It's still not too late to go with my original approach... ;)
I checked it again. I still think the patch below can avoid the bug.
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=5a6c35f9af416114588298aa7a90b15bbed15a41
The process:
1. nvme_ns_head_submit_bio calls srcu_read_lock(&head->srcu).
2. nvme_ns_head_submit_bio adds the bio to current->bio_list instead of waiting for the frozen queue.
3. nvme_ns_head_submit_bio calls srcu_read_unlock(&head->srcu, srcu_idx).
So nvme_ns_head_submit_bio does not hold head->srcu for long when the queue is frozen, which avoids the deadlock, as sketched below.
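
Roughly, the relevant part of submit_bio_noacct() after that commit
(simplified):
--
blk_qc_t submit_bio_noacct(struct bio *bio)	/* simplified */
{
	if (current->bio_list) {
		/*
		 * Called recursively from a ->submit_bio handler such as
		 * nvme_ns_head_submit_bio(): just defer the bio instead of
		 * entering the (possibly frozen) namespace queue here.
		 */
		bio_list_add(&current->bio_list[0], bio);
		return BLK_QC_T_NONE;
	}
	/*
	 * Top-level caller: the deferred bios are only processed after
	 * ->submit_bio returns, i.e. after head->srcu has been dropped.
	 */
	return __submit_bio_noacct(bio);
}
--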

Sagi, I suggest trying this patch.



