On Thu, 29 Jun 2017 09:09:35 +0530 Sahitya Tummala <stummala@xxxxxxxxxxxxxx> wrote:

> __list_lru_walk_one() acquires the nlru spin lock (nlru->lock) for a
> long duration if there are many items in the lru list.  As per the
> current code, it can hold the spin lock while walking up to UINT_MAX
> entries at a time.  So if there are many items in the lru list, a
> "BUG: spinlock lockup suspected" is observed in the below path -
>
> ...
>
> Fix this lockup by reducing the number of entries shrunk from the
> lru list to 1024 at once.  Also, add cond_resched() before processing
> the lru list again.
>
> ...
>
> --- a/fs/dcache.c
> +++ b/fs/dcache.c
> @@ -1133,11 +1133,12 @@ void shrink_dcache_sb(struct super_block *sb)
>  		LIST_HEAD(dispose);
>  
>  		freed = list_lru_walk(&sb->s_dentry_lru,
> -			dentry_lru_isolate_shrink, &dispose, UINT_MAX);
> +			dentry_lru_isolate_shrink, &dispose, 1024);
>  
>  		this_cpu_sub(nr_dentry_unused, freed);
>  		shrink_dentry_list(&dispose);
> -	} while (freed > 0);
> +		cond_resched();
> +	} while (list_lru_count(&sb->s_dentry_lru) > 0);
>  }
>  EXPORT_SYMBOL(shrink_dcache_sb);

I'll add a cc:stable to this one - a large dentry list is a relatively
common thing.

I'm assuming that [1/2] does not need to be backported, OK?
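
For anyone reading along, the shape the patch adopts is the usual
"bounded walk under the lock, then reschedule" loop.  Below is a minimal
stand-alone user-space sketch of that shape, not kernel code: struct node,
drain_one_batch() and BATCH are invented for the example, and a pthread
spinlock plus sched_yield() stand in for nlru->lock and cond_resched().

/*
 * Sketch of the batching pattern in the patch above: detach at most
 * BATCH items per lock hold, do the per-item work with the lock
 * dropped, yield, and loop until the list is empty.
 */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

#define BATCH 1024

struct node {
	struct node *next;
};

static struct node *head;
static pthread_spinlock_t list_lock;

/* Detach up to BATCH items under the lock; free them with the lock dropped. */
static long drain_one_batch(void)
{
	struct node *batch = NULL;
	long taken = 0;

	pthread_spin_lock(&list_lock);
	while (head && taken < BATCH) {
		struct node *n = head;

		head = n->next;
		n->next = batch;
		batch = n;
		taken++;
	}
	pthread_spin_unlock(&list_lock);

	while (batch) {			/* the "expensive" per-item work */
		struct node *n = batch;

		batch = batch->next;
		free(n);
	}
	return taken;
}

int main(void)
{
	long freed = 0;
	int i;

	pthread_spin_init(&list_lock, PTHREAD_PROCESS_PRIVATE);

	for (i = 0; i < 10 * BATCH + 7; i++) {	/* build a long list */
		struct node *n = malloc(sizeof(*n));

		n->next = head;
		head = n;
	}

	/* Same shape as the patched loop: bounded walk, then reschedule. */
	do {
		freed += drain_one_batch();
		sched_yield();		/* user-space stand-in for cond_resched() */
	} while (head != NULL);

	printf("freed %ld entries in batches of at most %d\n", freed, BATCH);
	return 0;
}

The property that matters is that the lock hold time is now bounded by
the batch size rather than by the total length of the list, which is
what keeps the lockup detector quiet on very large dentry lists.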