On Thu 18-05-17 19:12:58, Sahitya Tummala wrote:
> Hi all,
>
> I am observing a "BUG: spinlock lockup suspected" issue in the below path -
>
> [<ffffff8eca0fb0bc>] spin_bug+0x90
> [<ffffff8eca0fb220>] do_raw_spin_lock+0xfc
> [<ffffff8ecafb7798>] _raw_spin_lock+0x28
> [<ffffff8eca1ae884>] list_lru_add+0x28
> [<ffffff8eca1f5dac>] dput+0x1c8
> [<ffffff8eca1eb46c>] path_put+0x20
> [<ffffff8eca1eb73c>] terminate_walk+0x3c
> [<ffffff8eca1eee58>] path_lookupat+0x100
> [<ffffff8eca1f00fc>] filename_lookup+0x6c
> [<ffffff8eca1f0264>] user_path_at_empty+0x54
> [<ffffff8eca1e066c>] SyS_faccessat+0xd0
> [<ffffff8eca084e30>] el0_svc_naked+0x24
>
> This nlru->lock has been acquired by another CPU in this path -
>
> [<ffffff8eca1f5fd0>] d_lru_shrink_move+0x34
> [<ffffff8eca1f6180>] dentry_lru_isolate_shrink+0x48
> [<ffffff8eca1aeafc>] __list_lru_walk_one.isra.10+0x94
> [<ffffff8eca1aec34>] list_lru_walk_node+0x40
> [<ffffff8eca1f6620>] shrink_dcache_sb+0x60
> [<ffffff8eca1e56a8>] do_remount_sb+0xbc
> [<ffffff8eca1e583c>] do_emergency_remount+0xb0
> [<ffffff8eca0ba510>] process_one_work+0x228
> [<ffffff8eca0bb158>] worker_thread+0x2e0
> [<ffffff8eca0c040c>] kthread+0xf4
> [<ffffff8eca084dd0>] ret_from_fork+0x10
>
> At the time of the crash, I see that __list_lru_walk_one() shows the number
> of entries isolated as 1774475, with nr_items still pending at 130748. On
> my system, I see that it takes around 75ms for __list_lru_walk_one() to
> walk 100000 dentries. So for a total of 1900000 dentries, as in the issue
> scenario, it will take up to 1425ms, which explains why the spinlock lockup
> condition was hit on the other CPU.
>
> It looks like __list_lru_walk_one() is expected to take longer the more
> dentries are present. Any suggestion on how to optimize the
> __list_lru_walk_one() path to avoid the above spinlock lockup condition?

Well, I suppose you could add a cond_resched_lock(&nlru->lock) check in
__list_lru_walk_one() and goto restart if it rescheduled...
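
Something like the following completely untested sketch. It is written
against the walk loop as it currently looks in mm/list_lru.c - the
'restart' label and the variables (nlru, l, item, n, isolate, cb_arg,
nr_to_walk) are the ones already used there; only the
cond_resched_lock() check at the bottom of the loop is new:

	spin_lock(&nlru->lock);
	l = list_lru_from_memcg_idx(nlru, memcg_idx);
restart:
	list_for_each_safe(item, n, &l->list) {
		enum lru_status ret;

		if (!*nr_to_walk)
			break;
		--*nr_to_walk;

		ret = isolate(item, l, &nlru->lock, cb_arg);
		/* ... existing switch (ret) handling stays as is ... */

		/*
		 * NEW: drop nlru->lock if we need to reschedule or if
		 * somebody else is spinning on the lock.
		 * cond_resched_lock() returns nonzero iff it dropped the
		 * lock, in which case our list traversal is no longer
		 * valid and we have to restart - same as for LRU_RETRY.
		 */
		if (cond_resched_lock(&nlru->lock))
			goto restart;
	}
	spin_unlock(&nlru->lock);

This way the lock gets dropped whenever another CPU is spinning on it
(or we need to reschedule) instead of being held across the whole walk.
And since *nr_to_walk is decremented before each isolate() call and is
not reset on restart, the walk still terminates even if it keeps
getting interrupted.

								Honza
--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR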