The patch titled
     Subject: fs/drop_caches.c: avoid softlockups in drop_pagecache_sb()
has been removed from the -mm tree.  Its filename was
     vfs-avoid-softlockups-in-drop_pagecache_sb.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: Jan Kara <jack@xxxxxxx>
Subject: fs/drop_caches.c: avoid softlockups in drop_pagecache_sb()

When a superblock has lots of inodes without any pagecache (as is the
case for /proc), drop_pagecache_sb() will iterate through all of them
without dropping sb->s_inode_list_lock, which can lead to softlockups
(one of our customers hit this).

Fix the problem by going to the slow path and doing cond_resched() in
case the process needs rescheduling.

Link: http://lkml.kernel.org/r/20190114085343.15011-1-jack@xxxxxxx
Signed-off-by: Jan Kara <jack@xxxxxxx>
Acked-by: Michal Hocko <mhocko@xxxxxxxx>
Reviewed-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Al Viro <viro@xxxxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 fs/drop_caches.c |    8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

--- a/fs/drop_caches.c~vfs-avoid-softlockups-in-drop_pagecache_sb
+++ a/fs/drop_caches.c
@@ -21,8 +21,13 @@ static void drop_pagecache_sb(struct sup
 	spin_lock(&sb->s_inode_list_lock);
 	list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
 		spin_lock(&inode->i_lock);
+		/*
+		 * We must skip inodes in unusual state. We may also skip
+		 * inodes without pages but we deliberately won't in case
+		 * we need to reschedule to avoid softlockups.
+		 */
 		if ((inode->i_state & (I_FREEING|I_WILL_FREE|I_NEW)) ||
-		    (inode->i_mapping->nrpages == 0)) {
+		    (inode->i_mapping->nrpages == 0 && !need_resched())) {
 			spin_unlock(&inode->i_lock);
 			continue;
 		}
@@ -30,6 +35,7 @@ static void drop_pagecache_sb(struct sup
 		spin_unlock(&inode->i_lock);
 		spin_unlock(&sb->s_inode_list_lock);
 
+		cond_resched();
 		invalidate_mapping_pages(inode->i_mapping, 0, -1);
 		iput(toput_inode);
 		toput_inode = inode;
_

Patches currently in -mm which might be from jack@xxxxxxx are
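
As an aside, here is a minimal userspace C sketch of the general pattern the
patch applies: during a long walk over a lock-protected list, periodically
drop the lock and yield the CPU so the walk cannot monopolize it.  All names
below (struct item, walk_items, list_lock, process_item) are made up for the
illustration; it is not code from the patch or from the kernel.

/*
 * Illustration only: walk a long linked list under a mutex, but every so
 * often drop the lock and yield, the userspace analogue of the
 * need_resched()/cond_resched() dance in drop_pagecache_sb().
 *
 * Simplifying assumption: items are never freed, so "it" remains valid
 * across the unlock.  The real kernel code instead pins the inode it is
 * working on (note the iput() of the previous inode in the diff) and
 * revalidates its position after retaking sb->s_inode_list_lock.
 */
#include <pthread.h>
#include <sched.h>

struct item {
	struct item *next;
	int data;
};

static struct item *item_list;
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

static void process_item(struct item *it)
{
	/* Placeholder for per-item work. */
	(void)it;
}

static void walk_items(void)
{
	struct item *it;
	int batch = 0;

	pthread_mutex_lock(&list_lock);
	for (it = item_list; it; it = it->next) {
		/* Periodically give other threads a chance to run. */
		if (++batch % 1024 == 0) {
			pthread_mutex_unlock(&list_lock);
			sched_yield();	/* analogue of cond_resched() */
			pthread_mutex_lock(&list_lock);
		}
		process_item(it);
	}
	pthread_mutex_unlock(&list_lock);
}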