Re: [PATCH REBASED] mm: Throttle allocators when failing reclaim over memory.high

Hey Michal,

Just to come back to your last e-mail about how this interacts with OOM.

Michal Hocko writes:
> I am not really opposed to the throttling in the absence of reclaimable
> memory. We do that for the regular allocation paths already
> (should_reclaim_retry). A swapless system with anon memory is very likely
> to oom too quickly and this sounds like a real problem. But I do not think
> that we should throttle the allocation to freeze it completely. We should
> eventually OOM. And that was my question about essentially. How much we
> can/should throttle to give a high limit events consumer enough time to
> intervene. I am sorry to still not have time to study the patch more
> closely but this should be explained in the changelog. Are we talking
> about seconds/minutes or simply freeze each allocator to death?

Per allocation, the maximum delay is 2 seconds (MEMCG_MAX_HIGH_DELAY_JIFFIES), so we don't freeze things to death -- allocators can recover if they are amenable to it. The idea is that primarily userspace handles it, just like memory.oom_control in v1 (as mentioned in the commit message); as a last resort, the kernel will still OOM if our userspace daemon has kicked the bucket or is otherwise ineffective.
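
To put a rough shape on that, here's a minimal sketch of the clamping idea only (not the patch code itself; HZ here is just an assumed tick rate and clamp_high_delay() is a hypothetical helper -- only the 2-second ceiling comes from the patch):

	/* Illustrative sketch, not the patch: cap each per-allocation sleep. */
	#define HZ				1000UL	/* assumed tick rate */
	#define MEMCG_MAX_HIGH_DELAY_JIFFIES	(2UL * HZ)	/* 2 seconds */

	/* Slow down a task over memory.high, but never freeze it outright. */
	static unsigned long clamp_high_delay(unsigned long penalty_jiffies)
	{
		return penalty_jiffies < MEMCG_MAX_HIGH_DELAY_JIFFIES ?
			penalty_jiffies : MEMCG_MAX_HIGH_DELAY_JIFFIES;
	}

So each individual sleep is bounded; across repeated allocations the task keeps getting slowed, but it can still make progress once usage drops back under memory.high or userspace intervenes.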

If you're setting memory.high and memory.max together, then setting memory.high always has to come with a) tolerance of heavy throttling by your application, and b) userspace intervention should sustained high memory pressure result. This patch doesn't really change those semantics.
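
As a concrete illustration of that setup (the cgroup path and values below are hypothetical, not from the patch), the pattern is simply to put memory.high somewhat below memory.max, so throttling and high events kick in before the hard OOM boundary:

	/* Hypothetical userspace example: configure both knobs for one cgroup. */
	#include <stdio.h>

	static int set_limit(const char *file, const char *val)
	{
		char path[256];
		FILE *f;

		snprintf(path, sizeof(path), "/sys/fs/cgroup/mygroup/%s", file);
		f = fopen(path, "w");
		if (!f)
			return -1;
		fputs(val, f);
		return fclose(f);
	}

	int main(void)
	{
		set_limit("memory.high", "8G");		/* throttling + high events start here */
		set_limit("memory.max", "10G");		/* hard limit: the OOM boundary */
		return 0;
	}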


