On Mon 21-02-22 11:17:49, cgel.zte@xxxxxxxxx wrote:
> From: Guo Ziliang <guo.ziliang@xxxxxxxxxx>
> 
> In our testing, a deadloop task was found. Through sysrq printing, the
> same stack was found every time, as follows:
> 
>   __swap_duplicate+0x58/0x1a0
>   swapcache_prepare+0x24/0x30
>   __read_swap_cache_async+0xac/0x220
>   read_swap_cache_async+0x58/0xa0
>   swapin_readahead+0x24c/0x628
>   do_swap_page+0x374/0x8a0
>   __handle_mm_fault+0x598/0xd60
>   handle_mm_fault+0x114/0x200
>   do_page_fault+0x148/0x4d0
>   do_translation_fault+0xb0/0xd4
>   do_mem_abort+0x50/0xb0
> 
> The reason for the deadloop is that swapcache_prepare() always returns
> -EEXIST, indicating that SWAP_HAS_CACHE has not been cleared, so the
> task can never exit the loop. We suspect that the task which clears the
> SWAP_HAS_CACHE flag never gets a chance to run. We tried lowering the
> priority of the task stuck in the deadloop so that the task which
> clears the SWAP_HAS_CACHE flag could run. The results show that the
> system returns to normal after the priority is lowered.
> 
> In our testing, multiple real-time tasks are bound to the same core,
> and the task in the deadloop is the highest-priority task on that core,
> so the deadloop task cannot be preempted.
> 
> Although cond_resched() is called in __read_swap_cache_async(), it is
> an empty function on a preemptive kernel and so cannot release the CPU.
> A high-priority task does not release the CPU unless it is preempted by
> an even higher-priority task, and when it is already the highest-priority
> task on its core, no other task can be scheduled. So we think
> cond_resched() should be replaced with schedule_timeout_uninterruptible(1).
> schedule_timeout_uninterruptible() calls set_current_state() first to
> set the task state, so the task is removed from the run queue, thereby
> giving up the CPU and preventing the task from running in kernel mode
> for too long.

I am sorry but I really do not see how this case is any different from
any other kernel code path being hogged by an RT task. We surely
shouldn't put sleeps into all the random paths which are doing
cond_resched at the moment.

> Reported-by: Zeal Robot <zealci@xxxxxxxxxx>
> Reviewed-by: Ran Xiaokai <ran.xiaokai@xxxxxxxxxx>
> Reviewed-by: Jiang Xuexin <jiang.xuexin@xxxxxxxxxx>
> Reviewed-by: Yang Yang <yang.yang29@xxxxxxxxxx>
> Signed-off-by: Guo Ziliang <guo.ziliang@xxxxxxxxxx>
> ---
>  mm/swap_state.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 8d4104242100..ee67164531c0 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -478,7 +478,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
>  		 * __read_swap_cache_async(), which has set SWAP_HAS_CACHE
>  		 * in swap_map, but not yet added its page to swap cache.
>  		 */
> -		cond_resched();
> +		schedule_timeout_uninterruptible(1);
>  	}
>  
>  	/*
> -- 
> 2.15.2

-- 
Michal Hocko
SUSE Labs
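
For context, the retry loop being patched in __read_swap_cache_async()
looks roughly like the following simplified sketch. It is an
illustration based on the shape of mainline mm/swap_state.c around this
kernel version, not the exact source; swap-device refcounting, the
unused-slot check and other details are omitted.

	/* simplified sketch of the loop in __read_swap_cache_async() */
	for (;;) {
		int err;

		/* Page already in the swap cache?  Then we are done. */
		page = find_get_page(swap_address_space(entry),
				     swp_offset(entry));
		if (page)
			return page;

		/* Allocate a page to read the swap slot into. */
		page = alloc_page_vma(gfp_mask, vma, addr);
		if (!page)
			return NULL;

		/* Try to claim the entry by setting SWAP_HAS_CACHE. */
		err = swapcache_prepare(entry);
		if (!err)
			break;	/* we own it; add the page to the cache */

		put_page(page);
		if (err != -EEXIST)
			return NULL;

		/*
		 * -EEXIST: another task has set SWAP_HAS_CACHE but has
		 * not yet added its page to the swap cache.  If that
		 * task is starved (e.g. by the highest-priority RT
		 * task spinning here), this loop livelocks, because
		 * cond_resched() does not take the caller off the run
		 * queue.  The patch replaces it with
		 * schedule_timeout_uninterruptible(1), which sleeps
		 * for one tick and lets the other task make progress.
		 */
		cond_resched();
	}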