Re: DM brokenness with NOWAIT

On Fri, Sep 15 2023 at 12:14P -0400,
Jens Axboe <axboe@xxxxxxxxx> wrote:

> On 9/15/23 10:04 AM, Jens Axboe wrote:
> > Hi,
> > 
> > Threw some db traffic into my testing mix, and that ended in tears
> > very quickly:
> > 
> > CPU: 7 PID: 49609 Comm: ringbuf-read.t Tainted: G        W          6.6.0-rc1-g39956d2dcd81 #129
> > Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
> > Call Trace:
> >  <TASK>
> >  dump_stack_lvl+0x11d/0x1b0
> >  __might_resched+0x3c3/0x5e0
> >  ? preempt_count_sub+0x150/0x150
> >  mempool_alloc+0x1e2/0x390
> >  ? sanity_check_pinned_pages+0x23/0x1010
> >  ? mempool_resize+0x7d0/0x7d0
> >  bio_alloc_bioset+0x417/0x8c0
> >  ? bvec_alloc+0x200/0x200
> >  ? __gup_device_huge+0x900/0x900
> >  bio_alloc_clone+0x53/0x100
> >  dm_submit_bio+0x27f/0x1a20
> >  ? lock_release+0x4b7/0x670
> >  ? pin_user_pages_fast+0xb6/0xf0
> >  ? blk_try_enter_queue+0x1a0/0x4d0
> >  ? dm_dax_direct_access+0x260/0x260
> >  ? rcu_is_watching+0x12/0xb0
> >  ? blk_try_enter_queue+0x1cc/0x4d0
> >  __submit_bio+0x239/0x310
> >  ? __bio_queue_enter+0x700/0x700
> >  ? kvm_clock_get_cycles+0x40/0x60
> >  ? ktime_get+0x285/0x470
> >  submit_bio_noacct_nocheck+0x4d9/0xb80
> >  ? should_fail_request+0x80/0x80
> >  ? preempt_count_sub+0x150/0x150
> >  ? folio_flags+0x6c/0x1e0
> >  submit_bio_noacct+0x53e/0x1b30
> >  blkdev_direct_IO.part.0+0x833/0x1810
> >  ? rcu_is_watching+0x12/0xb0
> >  ? lock_release+0x4b7/0x670
> >  ? blkdev_read_iter+0x40d/0x530
> >  ? reacquire_held_locks+0x4e0/0x4e0
> >  ? __blkdev_direct_IO_simple+0x780/0x780
> >  ? rcu_is_watching+0x12/0xb0
> >  ? __mark_inode_dirty+0x297/0xd50
> >  ? preempt_count_add+0x72/0x140
> >  blkdev_read_iter+0x2a4/0x530
> >  ? blkdev_write_iter+0xc40/0xc40
> >  io_read+0x369/0x1490
> >  ? rcu_is_watching+0x12/0xb0
> >  ? io_writev_prep_async+0x260/0x260
> >  ? __fget_files+0x279/0x410
> >  ? rcu_is_watching+0x12/0xb0
> >  io_issue_sqe+0x18a/0xd90
> >  io_submit_sqes+0x970/0x1ed0
> >  __do_sys_io_uring_enter+0x14d4/0x2650
> >  ? io_submit_sqes+0x1ed0/0x1ed0
> >  ? rcu_is_watching+0x12/0xb0
> >  ? __do_sys_io_uring_register+0x3f6/0x2190
> >  ? io_req_caches_free+0x500/0x500
> >  ? ksys_mmap_pgoff+0x85/0x5b0
> >  ? rcu_is_watching+0x12/0xb0
> >  ? trace_irq_enable.constprop.0+0xd0/0x100
> >  do_syscall_64+0x39/0xb0
> >  entry_SYSCALL_64_after_hwframe+0x63/0xcd
> > 
> > which seems to demonstrate a misunderstanding of what REQ_NOWAIT is
> > about. In particular, it seems to assume you can then submit from
> > atomic context? DM does an rcu_read_lock() and happily proceeds to
> > attempt to submit IO inside the RCU read-side critical section.
> 
> Did a quick check to see where this came from, and it got added with:
> 
> commit 563a225c9fd207326c2a2af9d59b4097cb31ce70
> Author: Mike Snitzer <snitzer@xxxxxxxxxx>
> Date:   Sat Mar 26 21:08:36 2022 -0400
> 
>     dm: introduce dm_{get,put}_live_table_bio called from dm_submit_bio
> 
> which conspicuously doesn't include any numbers on why this is necessary
> or a good thing, and notably probably wasn't tested? This landed in 5.19
> fwiw.

Don't recall what I was thinking, and I clearly didn't properly test it
either... I should've consulted Mikulas.  Sorry for the trouble.

Would you like to send a formal patch with your Signed-off-by and I'll
mark it for stable@ and get it to Linus?

Mike

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/dm-devel
