Re: [PATCH linux-next] mm: swap: get rid of deadloop in swapin readahead

On Mon 28-02-22 07:33:15, Andrew Morton wrote:
> On Mon, 28 Feb 2022 08:57:49 +0100 Michal Hocko <mhocko@xxxxxxxx> wrote:
> 
> > On Mon 21-02-22 11:17:49, cgel.zte@xxxxxxxxx wrote:
> > > From: Guo Ziliang <guo.ziliang@xxxxxxxxxx>
> > > 
> > > In our testing, a task stuck in a deadloop was found. Through sysrq
> > > printing, the same stack was seen every time, as follows:
> > > __swap_duplicate+0x58/0x1a0
> > > swapcache_prepare+0x24/0x30
> > > __read_swap_cache_async+0xac/0x220
> > > read_swap_cache_async+0x58/0xa0
> > > swapin_readahead+0x24c/0x628
> > > do_swap_page+0x374/0x8a0
> > > __handle_mm_fault+0x598/0xd60
> > > handle_mm_fault+0x114/0x200
> > > do_page_fault+0x148/0x4d0
> > > do_translation_fault+0xb0/0xd4
> > > do_mem_abort+0x50/0xb0
> > > 
> > > The reason for the deadloop is that swapcache_prepare() always returns
> > > -EEXIST, indicating that SWAP_HAS_CACHE has not been cleared, so the
> > > task can never jump out of the loop. We suspect that the task that clears
> > > the SWAP_HAS_CACHE flag never gets a chance to run. We try to lower
> > > the priority of the task stuck in a deadloop so that the task that
> > > clears the SWAP_HAS_CACHE flag will run. The results show that the
> > > system returns to normal after the priority is lowered.
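
For reference, the retry loop in question looks roughly like this (a
simplified sketch of __read_swap_cache_async() in mm/swap_state.c, not the
exact kernel source):

        for (;;) {
                /* ... look up the swap cache / allocate a new page ... */
                err = swapcache_prepare(entry);
                if (!err)
                        break;          /* we now own SWAP_HAS_CACHE */
                if (err != -EEXIST)
                        return NULL;    /* the entry was freed meanwhile */
                /*
                 * -EEXIST: another task has set SWAP_HAS_CACHE but has not
                 * finished with the swap cache yet.  Wait for it to clear
                 * the bit and retry; this is where the RT task spins.
                 */
                cond_resched();
        }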
> > > 
> > > In our testing, multiple real-time tasks are bound to the same core,
> > > and the task in the deadloop is the highest-priority task on that
> > > core, so it cannot be preempted.
> > > 
> > > Although cond_resched() is called in __read_swap_cache_async(), it is
> > > an empty function on a preemptible kernel and so cannot actually give
> > > up the CPU. A high-priority task will not release the CPU unless it
> > > is preempted by an even higher-priority task, and since the task in
> > > the deadloop is already the highest-priority task on this core, no
> > > other task can be scheduled. We therefore think cond_resched() should
> > > be replaced with schedule_timeout_uninterruptible(1), which calls
> > > set_current_state() first to change the task state, so the task is
> > > removed from the run queue. This achieves the goal of giving up the
> > > CPU and prevents the task from running in kernel mode for too long.
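
The proposed change is essentially a one-liner in that retry loop (an
illustrative hunk, not necessarily the exact context of the patch):

        -               cond_resched();
        +               schedule_timeout_uninterruptible(1);

schedule_timeout_uninterruptible(1) sets the task state to
TASK_UNINTERRUPTIBLE before calling schedule_timeout(), so the spinning
task is taken off the run queue for at least one tick, giving the task
that owns SWAP_HAS_CACHE a chance to run and clear it.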
> > 
> > I am sorry, but I really do not see how this case is any different from
> > any other kernel code path being hogged by an RT task. We surely
> > shouldn't put sleeps into all the random paths which currently do
> > cond_resched.
> 
> But this cond_resched() is different from most.  This one is attempting
> to yield the CPU so this task can make progress.  And cond_resched()
> simply isn't an appropriate way of doing that, because in this fairly
> common situation it's a no-op.
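
For context, what cond_resched() boils down to, in a simplified sketch of
the semantics (not the real scheduler definitions):

        #ifdef CONFIG_PREEMPTION
        /* Fully preemptible kernel: cond_resched() is essentially a no-op. */
        static inline int _cond_resched(void) { return 0; }
        #else
        /*
         * Otherwise it reschedules only when need_resched is set, which
         * never happens for a task that is already the highest-priority
         * runnable task on its CPU; either way, current stays on the run
         * queue in the TASK_RUNNING state.
         */
        static inline int _cond_resched(void)
        {
                if (need_resched()) {
                        schedule();
                        return 1;
                }
                return 0;
        }
        #endif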

I might be missing something, but I really do not see how this is any
different from the page allocator path, which also only does cond_resched
(well, except for throttling, but that might simply not trigger), or from
other paths which just do cond_resched while waiting for progress
somewhere else.

Not that I like this situation, but a !PREEMPT kernel with RT priority
tasks is rather limited and full of potential problems IMHO.
-- 
Michal Hocko
SUSE Labs



