We've noticed that repeatedly opening and closing /dev/ptmx eventually
causes a soft lockup. Here is a summary of what happens.

When a user opens /dev/ptmx, devpts_pty_new calls d_alloc_name and
d_add, and the new dentry is inserted into the dentry hashtable. Later,
on close, devpts_pty_kill calls d_delete and dput, but because of the
conditionals in those two functions, neither ends up calling __d_drop:

- d_delete sees "dentry->d_lockref.count == 1" and takes the
  dentry_unlink_inode branch, avoiding __d_drop.

- dput takes the "likely(fast_dput(dentry))" branch and skips
  dentry_kill() altogether (which would have called __d_drop).

The problem is that each devpts_pty_new creates a new dentry, which
consequently stays in the hashtable forever. The dentry cannot be
reused, because devpts always uses d_add instead of d_alloc_parallel.

This becomes a problem when a user opens and closes /dev/ptmx in a
loop: one particular hash bucket is hit every time, so its chain grows
without bound. On a system running an unfortunate application that
spawns many short-lived processes and allocates a tty for each, these
stale dentries accumulate, and since they all hash the same they can
form a hash chain millions of elements long. At that point a single
__d_lookup_rcu call can take on the order of a second, so
d_alloc_parallel more often than not needs several retries due to
seqcount changes.

Reproducer:

    while echo >/dev/ptmx; do :; done

Over time, any unlucky path lookup that hits the same hash bucket as
the one affected above will slow down significantly.

Should devpts clean up differently? Or should d_delete be fixed?

Thanks,
Fam