[Please do not top post - thank you]

[CC Hugh - the original patch was
http://lkml.kernel.org/r/2018072514375722198958@xxxxxxxxxxxx]

On Wed 25-07-18 15:57:55, zhaowuyun@xxxxxxxxxxxx wrote:
> That is a BUG we found in mm/vmscan.c at kernel version 4.9.82.

The code is quite similar in the current tree as well.

> Summary: TASK A (normal priority) doing __remove_mapping is preempted
> by TASK B (RT priority) doing __read_swap_cache_async. TASK A is
> preempted before swapcache_free, leaving the SWAP_HAS_CACHE flag set
> for the swap entry. TASK B, in __read_swap_cache_async, will not
> succeed at swapcache_prepare(entry) because the swap cache entry
> still exists, so it loops forever because it is an RT thread...
>
> The spinlock is unlocked before swapcache_free, so disable preemption
> until swapcache_free has executed.

OK, I see your point now. I had missed that the lock is dropped before
swapcache_free. How can disabling preemption prevent this race when the
code is preempted by an IRQ?
--
Michal Hocko
SUSE Labs
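
For reference, the interleaving described in the report is roughly the
following. This is a simplified sketch of the 4.9-era code paths; the
exact call sites (mapping->tree_lock, __delete_from_swap_cache,
find_get_page, the cond_resched() in the retry loop) are taken from my
reading of that kernel, not quoted from the report itself.

  TASK A (normal priority)                TASK B (RT priority, same CPU)
  ------------------------                ------------------------------
  __remove_mapping()
    spin_lock_irqsave(&mapping->tree_lock)
    __delete_from_swap_cache(page)
    spin_unlock_irqrestore(&mapping->tree_lock)
      <preempted by B before swapcache_free(swap)>
                                          __read_swap_cache_async()
                                            find_get_page() -> NULL
                                              (page already gone from
                                               the swap cache)
                                            swapcache_prepare(entry)
                                              -> -EEXIST, because
                                                 SWAP_HAS_CACHE is still
                                                 set on A's behalf
                                            cond_resched()
                                              /* RT task, does not yield
                                                 to the lower-priority A */
                                            continue;  /* retries forever */
    swapcache_free(swap)
      /* never reached while B monopolizes the CPU */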