Re: [PATCH] mm/page_alloc: try oom if reclaim is unable to make forward progress

On Fri 19-03-21 17:29:01, Aaron Tomlin wrote:
> Hi Michal,
> 
> On Thu 2021-03-18 17:16 +0100, Michal Hocko wrote:
> > On Mon 15-03-21 16:58:37, Aaron Tomlin wrote:
> > > In the situation where direct reclaim is required to make progress for
> > > compaction, but no_progress_loops is already over the limit of
> > > MAX_RECLAIM_RETRIES, consider invoking the OOM killer.
> 
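
For reference, the tail of the retry loop in __alloc_pages_slowpath()
looked roughly like this at the time (an approximate excerpt from
mm/page_alloc.c, not the patch itself); note that should_compact_retry()
can send the allocator back around the loop even after
should_reclaim_retry() has given up, which is the window the patch is
aimed at:

	if (should_reclaim_retry(gfp_mask, order, ac, alloc_flags,
				 did_some_progress > 0, &no_progress_loops))
		goto retry;

	/*
	 * Even when reclaim retries are exhausted, a request that made
	 * "some" reclaim progress can be retried for compaction here,
	 * without the OOM path further down ever being reached.
	 */
	if (did_some_progress > 0 &&
			should_compact_retry(ac, order, alloc_flags,
				compact_result, &compact_priority,
				&compaction_retries))
		goto retry;
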
> Firstly, thank you for your response.
> 
> > What is the problem you are trying to fix?
> 
> If I understand correctly, in the case of a "costly" order allocation
> request that is permitted to repeatedly retry, it is possible to exceed
> the maximum reclaim retry threshold (MAX_RECLAIM_RETRIES) as long as
> "some" progress is being made, even at the highest compaction priority.
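
The accounting being described sits in should_reclaim_retry(); roughly,
for kernels of that era (approximate excerpt), a costly order bumps the
no-progress counter even when reclaim did make progress:

	/*
	 * Costly allocations might have made some progress, but that
	 * does not mean the requested order will become available given
	 * high fragmentation, so always bump the no-progress counter
	 * for them.
	 */
	if (did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER)
		*no_progress_loops = 0;
	else
		(*no_progress_loops)++;

	/*
	 * Make sure we converge to OOM if we cannot make any progress
	 * several times in a row.
	 */
	if (*no_progress_loops > MAX_RECLAIM_RETRIES) {
		/* Before OOM, exhaust the highatomic reserve. */
		return unreserve_highatomic_pageblock(ac, true);
	}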

Costly orders already have retry heuristics in place. Could you be more
specific about what kind of problem you see with those?
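
Those heuristics live in should_compact_retry(); approximately, and
subject to kernel version (max_retries starts from MAX_COMPACT_RETRIES,
16 upstream at the time):

	/*
	 * !costly requests are de facto nofail, so they get the full
	 * retry budget; costly __GFP_RETRY_MAYFAIL requests may fail
	 * and callers are expected to cope, so they get 1/4 of it.
	 */
	if (order > PAGE_ALLOC_COSTLY_ORDER)
		max_retries /= 4;
	if (*compaction_retries <= max_retries) {
		ret = true;
		goto out;
	}

	/*
	 * Escalate the compaction priority before giving up for good;
	 * costly requests stop at MIN_COMPACT_COSTLY_PRIORITY.
	 */
	min_priority = (order > PAGE_ALLOC_COSTLY_ORDER) ?
			MIN_COMPACT_COSTLY_PRIORITY : MIN_COMPACT_PRIORITY;
	if (*compact_priority > min_priority) {
		(*compact_priority)--;
		*compaction_retries = 0;
		ret = true;
	}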

> Furthermore, if the allocating task has a fatal signal pending, this is
> not considered.

A pending fatal signal is usually not a strong reason to cut the retry
count short or to fail allocations.

> In my opinion, it might be better to give up straight away, or to use
> the OOM killer to assist reclaim only in the non-costly order
> allocation scenario. Looking at __alloc_pages_may_oom(), the current
> logic is to entirely skip the OOM killer for a costly order request,
> which makes sense.
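
The check in question, from __alloc_pages_may_oom() (approximate
excerpt): killing a task frees base pages but does not defragment
memory, so it cannot be expected to satisfy a costly order request.

	/* The OOM killer will not help higher order allocs */
	if (order > PAGE_ALLOC_COSTLY_ORDER)
		goto out;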

Well, opinions might differ of course. The main question is whether
there are workloads that are unhappy with the existing behavior.

-- 
Michal Hocko
SUSE Labs



