On Mon, 13 Apr 2020, Yang Shi wrote:
> On Sun, Apr 12, 2020 at 3:11 AM syzbot
> <syzbot+c8a8197c8852f566b9d9@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
> >
> > Hello,
> >
> > syzbot found the following crash on:
> >
> > HEAD commit:    ae46d2aa mm/gup: Let __get_user_pages_locked() return -EIN..
> > git tree:       upstream
> > console output: https://syzkaller.appspot.com/x/log.txt?x=14a30a77e00000
> > kernel config:  https://syzkaller.appspot.com/x/.config?x=ca75979eeebf06c2
> > dashboard link: https://syzkaller.appspot.com/bug?extid=c8a8197c8852f566b9d9
> > compiler:       gcc (GCC) 9.0.0 20181231 (experimental)
> > syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=15f5632be00000
> > C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=132ade57e00000
> >
> > The bug was bisected to:
> >
> > commit 71725ed10c40696dc6bdccf8e225815dcef24dba
> > Author: Hugh Dickins <hughd@xxxxxxxxxx>
> > Date:   Tue Apr 7 03:07:57 2020 +0000
> >
> >     mm: huge tmpfs: try to split_huge_page() when punching hole
> >
> > bisection log:  https://syzkaller.appspot.com/x/bisect.txt?x=120a752be00000
> > final crash:    https://syzkaller.appspot.com/x/report.txt?x=110a752be00000
> > console output: https://syzkaller.appspot.com/x/log.txt?x=160a752be00000
> >
> > IMPORTANT: if you fix the bug, please add the following tag to the commit:
> > Reported-by: syzbot+c8a8197c8852f566b9d9@xxxxxxxxxxxxxxxxxxxxxxxxx
> > Fixes: 71725ed10c40 ("mm: huge tmpfs: try to split_huge_page() when punching hole")

No, that commit just gave syzkaller an easier way to reach old code.
> >
> > =====================================================
> > WARNING: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected
> > 5.6.0-syzkaller #0 Not tainted
> > -----------------------------------------------------
> > syz-executor428/8337 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
> > ffff8880a851c778 (&info->lock){....}-{2:2}, at: shmem_uncharge+0x24/0x270 mm/shmem.c:341
> >
> > and this task is already holding:
> > ffff8880a851cac8 (&xa->xa_lock#4){..-.}-{2:2}, at: spin_lock include/linux/spinlock.h:353 [inline]
> > ffff8880a851cac8 (&xa->xa_lock#4){..-.}-{2:2}, at: split_huge_page_to_list+0xad0/0x33b0 mm/huge_memory.c:2864
> > which would create a new lock dependency:
> >  (&xa->xa_lock#4){..-.}-{2:2} -> (&info->lock){....}-{2:2}
> >
> > but this new dependency connects a SOFTIRQ-irq-safe lock:
> >  (&xa->xa_lock#4){..-.}-{2:2}
>
> It looks like shmem_uncharge() is only called by __split_huge_page() and
> collapse_file(). collapse_file() has acquired the xa_lock with irqs
> disabled before acquiring info->lock, so it is safe.
> __split_huge_page() is called holding the xa_lock with irqs enabled,
> but lru_lock is acquired with irqs disabled before acquiring the xa_lock.
>
> So it is unnecessary to acquire info->lock with irqs disabled in
> shmem_uncharge(). Can syzbot try the below patch?

But I disagree with the patch below. You're right that IRQ-disabling
here is unnecessary, given its two callers; but I'm not sure that we
want it to look different from shmem_charge() and all other info->lock
takers; and, more importantly, I don't see how removing the redundant
IRQ-saving below could make it any less liable to deadlock.

The crucial observation comes lower down

> > to a SOFTIRQ-irq-unsafe lock:
> >  (shmlock_user_lock){+.+.}-{2:2}

and there's another syzbot report that's come out on shmlock_user_lock,
"possible deadlock in user_shm_lock".
I believe all that's needed to fix both reports is not to use
info->lock in shmem_lock() - I see now that we saw lockdep reports of
this kind internally, a long time ago, and fixed them in that way.

(I haven't composed the patch and references yet, and not decided if
I'll add it here or there or separately. I'll put it together now.)

Hugh

> diff --git a/mm/shmem.c b/mm/shmem.c
> index d722eb8..100117b 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -334,15 +334,14 @@ bool shmem_charge(struct inode *inode, long pages)
>  void shmem_uncharge(struct inode *inode, long pages)
>  {
>         struct shmem_inode_info *info = SHMEM_I(inode);
> -       unsigned long flags;
>
>         /* nrpages adjustment done by __delete_from_page_cache() or caller */
>
> -       spin_lock_irqsave(&info->lock, flags);
> +       spin_lock(&info->lock);
>         info->alloced -= pages;
>         inode->i_blocks -= pages * BLOCKS_PER_PAGE;
>         shmem_recalc_inode(inode);
> -       spin_unlock_irqrestore(&info->lock, flags);
> +       spin_unlock(&info->lock);
>
>         shmem_inode_unacct_blocks(inode, pages);
> }