Re: [syzbot] [btrfs?] [netfilter?] BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low! (2)

Aleksandr Nogikh <nogikh@xxxxxxxxxx> wrote:
> On Wed, Jul 19, 2023 at 7:11 PM syzbot
> <syzbot+9bbbacfbf1e04d5221f7@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
> >
> > > On Wed, Jul 19, 2023 at 02:32:51AM -0700, syzbot wrote:
> > >> syzbot has found a reproducer for the following issue on:
> > >>
> > >> HEAD commit:    e40939bbfc68 Merge branch 'for-next/core' into for-kernelci
> > >> git tree:       git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci
> > >> console output: https://syzkaller.appspot.com/x/log.txt?x=15d92aaaa80000
> > >> kernel config:  https://syzkaller.appspot.com/x/.config?x=c4a2640e4213bc2f
> > >> dashboard link: https://syzkaller.appspot.com/bug?extid=9bbbacfbf1e04d5221f7
> > >> compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
> > >> userspace arch: arm64
> > >> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=149b2d66a80000
> > >> C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=1214348aa80000
> > >>
> > >> Downloadable assets:
> > >> disk image: https://storage.googleapis.com/syzbot-assets/9d87aa312c0e/disk-e40939bb.raw.xz
> > >> vmlinux: https://storage.googleapis.com/syzbot-assets/22a11d32a8b2/vmlinux-e40939bb.xz
> > >> kernel image: https://storage.googleapis.com/syzbot-assets/0978b5788b52/Image-e40939bb.gz.xz
> > >
> > > #syz unset btrfs
> >
> > The following labels did not exist: btrfs
> 
> #syz set subsystems: netfilter

I don't see any netfilter involvement here.

The repro just creates a massive amount of team devices.

By the time it hits the LOCKDEP limits on my test VM it has
created ~2k team devices; system load is above 14 because udev
is also busy spawning hotplug scripts for the new devices.
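Roughly equivalent from the shell (a sketch only; the syzkaller repro
talks netlink directly, and the device names and count here are made up):

  # create a large number of team devices, one per iteration
  for i in $(seq 1 2000); do
      ip link add team$i type team
  done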

After a reboot, suspending the running reproducer at about 1500
devices (before it hits the lockdep limits) and then running
'ip link del' for the team devices gets the lockdep entries down to
~8k (from ~40k), which is in the range this VM has after a fresh boot.
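The cleanup was along these lines (again a sketch; it assumes the
device names from the loop above, and /proc/lockdep_stats needs
CONFIG_DEBUG_LOCKDEP):

  # delete the team devices again and watch the chain count recover
  for i in $(seq 1 1500); do
      ip link del team$i
  done
  grep -i chain /proc/lockdep_stats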

So as far as I can see this workload is just pushing lockdep
past what it can handle with the configured settings and is
not triggering any actual bug.


