Re: zram OOM behavior

On Mon, Nov 12, 2012 at 10:32:18PM +0900, Minchan Kim wrote:
> Sorry for the late reply.
> I'm still on a training course until this week, so my responses may be
> delayed, too.
> 
> > > > > > <SNIP>
> > > > > > It may be completely unnecessary to reclaim memory if the process that was
> > > > > > throttled and killed just exits quickly. As the fatal signal is pending
> > > > > > it will be able to use the pfmemalloc reserves.
> > > > > > 
> > > > > > > If it can't make forward progress with direct reclaim, it can end
> > > > > > > up in the OOM path, but out_of_memory checks whether current has a
> > > > > > > fatal signal pending, allows it to access the reserved memory pool
> > > > > > > for a quick exit, and returns without selecting another victim.
> > > > > > 
> > > > > > While this is true, what advantage is there to having a killed process
> > > > > > potentially reclaiming memory it does not need to?
> > > > > 
> > > > > A killed process needs memory in order to terminate. I don't think it's
> > > > > a good idea for it to use the reserved memory pool unconditionally just
> > > > > because it was throttled and killed. The reserved memory pool is a very
> > > > > restricted resource for emergencies, so using it should be a last resort
> > > > > after reclaim has failed.
> > > > > 
> > > > 
> > > > Part of that reclaim can be the process reclaiming its own pages and
> > > > putting them in swap just so it can exit shortly afterwards. If it was
> > > > throttled in this path, it implies that swap-over-NFS is enabled where
> > > 
> > > Could we make sure it's only the case for swap-over-NFS?
> > 
> > The PFMEMALLOC reserves being consumed to the point of throttling is only
> > expected in the case of swap-over-network -- check the pgscan_direct_throttle
> > counter to be sure. So it's already the case that this throttling logic and
> > its signal handling is mostly a swap-over-NFS thing. It is possible that
> > a badly behaving driver using GFP_ATOMIC to allocate long-lived buffers
> > could force a situation where a process gets throttled but I'm not aware
> > of a case where this happens today.
> 
> I've seen some custom drivers on the embedded side use GFP_ATOMIC liberally
> to avoid deadlock.

They must be getting a lot of allocation failures in that case.

> Of course, it's not good behavior, but it's what we live with.
> We can't even fix it because we don't have the source. :(
> 
> > 
> > > I think it can happen if the system has very slow thumb card.
> > > 
> > 
> > How? They shouldn't be stuck in throttling in this case. They should be
> > blocked on IO, congestion wait, dirty throttling etc.
> 
> Some block drivers (e.g. mmc) use a thread model with PF_MEMALLOC, so I think
> they can get stuck in the throttling logic.
> 

If they are using PF_MEMALLOC + GFP_ATOMIC, there is a strong chance
that they'll actually deadlock the system if there is a storm of
allocations. Such a driver is fundamentally broken in a dangerous way.
None of that is fixed by forcing an exiting process to enter direct reclaim.

> > 
> > > > such reclaim in fact might require the pfmemalloc reserves to be used to
> > > > allocate network buffers. It's potentially unnecessary work because the
> > > 
> > > You mean we need the pfmemalloc reserves to swap out anon pages via swap-over-NFS?
> > 
> > In very low-memory situations - yes. We can be at the min watermark but
> > still need to allocate a page for a network buffer to swap out the anon page.
> > 
> > > Yes, in this case you're right. It would be better to use the reserve pool
> > > just for exiting instead of swapping out over the network. But how can you
> > > make sure we have only anonymous pages when we try to reclaim?
> > > If there are some file-backed pages, we can avoid swapout at that time.
> > > Maybe we need some check.
> > > 
> > 
> > That would be a fairly invasive set of checks for a corner case: if
> > swap-over-NFS + critically low + about to OOM + file pages available, then
> > only reclaim file pages.
> > 
> > It's getting off track as to why we're having this discussion in the first
> > place -- looping due to improper handling of fatal signal pending.
> 
> If a user tunes /proc/sys/vm/swappiness, we could have many page cache pages
> when swap-over-NFS happens.

That's a BIG if. swappiness could be anything and it'll depend on the
workload anyway.

> My point is: why should we use the emergency memory pool when we have
> reclaimable memory?
> 

Because as I have already pointed out, the use of swap-over-nfs itself
creates more allocation pressure if it is used in the reclaim path. The
emergency memory pool is used *anyway* unless there are clean file pages
that can be discarded. But that's a big "if". The safer path is to try
and exit and if *that* fails *then* enter direct reclaim.

-- 
Mel Gorman
SUSE Labs

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@xxxxxxxxx
