Re: move more work to disk_release v2

On 2/27/22 09:21, Christoph Hellwig wrote:
Hi all,

this series resurrects and forward-ports larger parts of the
"block: don't drain file system I/O on del_gendisk" series from Ming.
It does not remove the draining in del_gendisk, but instead the
draining in the sd driver, which always was a bit ad-hoc.  As part of
that, sd and sr are switched to use the new ->free_disk method to
avoid having to clear disk->private_data, and the way the SCSI ULP is
looked up is cleaned up as well.

Git branch:

     git://git.infradead.org/users/hch/block.git freeze-5.18

Hi Christoph,

Thanks for the quick respin. If I run blktests as follows:

$ use_siw=1 ./check -q

then the first issue I hit with this branch is a deadlock report in
nvmet_rdma_free_queue(). That issue has already been reported - see also
https://lore.kernel.org/linux-nvme/CAHj4cs93BfTRgWF6PbuZcfq6AARHgYC2g=RQ-7Jgcf1-6h+2SQ@xxxxxxxxxxxxxx/

The second issue I run into with this branch is the KASAN report below
(it also triggers for nvmeof-mp/002):

==================================================================
BUG: KASAN: null-ptr-deref in __blk_account_io_start+0x28/0xa0
Read of size 8 at addr 0000000000000008 by task kworker/0:1H/159

CPU: 0 PID: 159 Comm: kworker/0:1H Not tainted 5.17.0-rc2-dbg+ #9
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.15.0-0-g2dd4b9b-rebuilt.opensuse.org 04/01/2014
Workqueue: kblockd blk_mq_requeue_work
Call Trace:
 <TASK>
 show_stack+0x52/0x58
 ? __blk_account_io_start+0x28/0xa0
 dump_stack_lvl+0x5b/0x82
 kasan_report.cold+0x64/0xdb
 ? __blk_account_io_start+0x28/0xa0
 __asan_load8+0x69/0x90
 __blk_account_io_start+0x28/0xa0
 blk_insert_cloned_request+0x107/0x3b0
 map_request+0x260/0x3c0 [dm_mod]
 ? dm_requeue_original_request+0x1a0/0x1a0 [dm_mod]
 ? blk_add_timer+0xc3/0x110
 dm_mq_queue_rq+0x207/0x400 [dm_mod]
 ? kasan_set_track+0x25/0x30
 ? kasan_set_free_info+0x24/0x40
 ? map_request+0x3c0/0x3c0 [dm_mod]
 ? nvmet_rdma_release_rsp+0xb3/0x3f0 [nvmet_rdma]
 ? nvmet_rdma_send_done+0x4a/0x70 [nvmet_rdma]
 ? __ib_process_cq+0x11b/0x3c0 [ib_core]
 ? ib_cq_poll_work+0x37/0xb0 [ib_core]
 ? process_one_work+0x594/0xad0
 ? worker_thread+0x2de/0x6b0
 ? kthread+0x15f/0x190
 ? ret_from_fork+0x1f/0x30
 blk_mq_dispatch_rq_list+0x344/0xc00
 ? blk_mq_mark_tag_wait+0x470/0x470
 ? rcu_read_lock_sched_held+0x16/0x80
 __blk_mq_sched_dispatch_requests+0x19b/0x280
 ? blk_mq_do_dispatch_ctx+0x3f0/0x3f0
 ? rcu_read_lock_sched_held+0x16/0x80
 blk_mq_sched_dispatch_requests+0x8a/0xc0
 __blk_mq_run_hw_queue+0x99/0x220
 __blk_mq_delay_run_hw_queue+0x372/0x3a0
 ? blk_mq_run_hw_queue+0xd7/0x2b0
 ? rcu_read_lock_sched_held+0x16/0x80
 blk_mq_run_hw_queue+0x1d6/0x2b0
 blk_mq_run_hw_queues+0xa0/0x1e0
 blk_mq_requeue_work+0x2e4/0x330
 ? blk_mq_try_issue_directly+0x60/0x60
 ? lock_acquire+0x76/0x1a0
 process_one_work+0x594/0xad0
 ? pwq_dec_nr_in_flight+0x120/0x120
 ? do_raw_spin_lock+0x115/0x1b0
 ? lock_acquire+0x76/0x1a0
 worker_thread+0x2de/0x6b0
 ? trace_hardirqs_on+0x2b/0x120
 ? process_one_work+0xad0/0xad0
 kthread+0x15f/0x190
 ? kthread_complete_and_exit+0x30/0x30
 ret_from_fork+0x1f/0x30
 </TASK>
==================================================================
