Re: "BUG: MAX_LOCKDEP_ENTRIES too low" with 6979 "&type->s_umount_key"

> On May 16, 2020, at 9:16 PM, Waiman Long <longman@xxxxxxxxxx> wrote:
> 
> The lock_list table entries are for tracking a lock's forward and backward dependencies. The lockdep_chains isn't the right lockdep file to look at. Instead, check the lockdep files for entries with the maximum BD (backward dependency) + FD (forward dependency). That will give you a better view of which locks are consuming most of the lock_list entries. Also take a look at lockdep_stats for an overall view of how much various table entries are being consumed.

Thanks for the hint. It seems something in the VFS is the culprit, because every single one of those entries is triggered from path_openat() (vfs_open()) or vfs_get_tree().

Right after boot, the number of lock_list entries is around 10000. After running the LTP syscalls and mm tests, it is around 20000. Then it goes all the way over the max (32700) while running the LTP fs tests, most of the time from a test that reads every single file in sysfs.
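
Something like the quick sketch below could be used to correlate that growth with the tests; it just polls the "direct dependencies" line from /proc/lockdep_stats (the field name is taken from lockdep_stats_show() in kernel/locking/lockdep_proc.c; the script itself is only an ad-hoc helper, nothing in the tree):

#!/usr/bin/env python3
# Ad-hoc sketch: poll the "direct dependencies" line from
# /proc/lockdep_stats so the lock_list consumption can be matched
# against whichever LTP test is running at the time.
import time

def direct_dependencies():
    with open('/proc/lockdep_stats') as f:
        for line in f:
            if 'direct dependencies' in line:
                return line.strip()
    return 'direct dependencies line not found'

while True:
    print(time.strftime('%H:%M:%S'), direct_dependencies())
    time.sleep(10)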

I'll decode the lockdep file to see if there are any more clues.
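
To rank the classes by FD + BD, something along these lines should work (a rough sketch; it assumes the "FD:"/"BD:" fields and the trailing "<usage>: <name>" layout printed by l_show() in kernel/locking/lockdep_proc.c, so the regex may need adjusting):

#!/usr/bin/env python3
# Ad-hoc sketch: rank lock classes in /proc/lockdep by the sum of their
# forward (FD) and backward (BD) dependency counts, i.e. the classes
# consuming the most lock_list entries.
import re

pattern = re.compile(r'FD:\s*(\d+)\s+BD:\s*(\d+)\s+\S+:\s*(.+)$')

classes = []
with open('/proc/lockdep') as f:
    for line in f:
        m = pattern.search(line)
        if m:
            fd, bd, name = int(m.group(1)), int(m.group(2)), m.group(3).strip()
            classes.append((fd + bd, fd, bd, name))

# Print the 20 classes with the largest FD + BD.
for total, fd, bd, name in sorted(classes, reverse=True)[:20]:
    print(f'{total:6d}  FD={fd:5d}  BD={bd:5d}  {name}')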


