Re: [PATCH] mm, page_alloc: actually ignore mempolicies for high priority allocations

On Mon 20-08-18 19:52:59, Tetsuo Handa wrote:
> On 2018/08/20 19:41, Michal Hocko wrote:
> > On Sat 18-08-18 22:02:14, Tetsuo Handa wrote:
> >> On 2018/08/16 19:03, Michal Hocko wrote:
> >>> The code is quite subtle and we have a bad history of copying stuff
> >>> without rethinking whether the code still is needed. Which is sad and a
> >>> clear sign that the code is too complex. I cannot say this change
> >>> doesn't have any subtle side effects but it makes the intention clear at
> >>> least so I _think_ it is good to go. If we find some unintended side
> >>> effects we should simply rethink the whole reset zonelist thing.
> >>
> >> Does this change affect
> >>
> >>         /*
> >>          * This is not a __GFP_THISNODE allocation, so a truncated nodemask in
> >>          * the page allocator means a mempolicy is in effect.  Cpuset policy
> >>          * is enforced in get_page_from_freelist().
> >>          */
> >>         if (oc->nodemask &&
> >>             !nodes_subset(node_states[N_MEMORY], *oc->nodemask)) {
> >>                 oc->totalpages = total_swap_pages;
> >>                 for_each_node_mask(nid, *oc->nodemask)
> >>                         oc->totalpages += node_spanned_pages(nid);
> >>                 return CONSTRAINT_MEMORY_POLICY;
> >>         }
> >>
> >> in constrained_alloc() called from
> >>
> >>         /*
> >>          * Check if there were limitations on the allocation (only relevant for
> >>          * NUMA and memcg) that may require different handling.
> >>          */
> >>         constraint = constrained_alloc(oc);
> >>         if (constraint != CONSTRAINT_MEMORY_POLICY)
> >>                 oc->nodemask = NULL;
> >>
> >> in out_of_memory() ?
> > 
> > No practical difference AFAICS. We are losing the nodemask for oom
> > victims, but their mere existence should make oom decisions void,
> > so the constraint shouldn't really matter.
> > 
> 
> If my guess is correct that whether oc->nodemask is NULL affects whether
> oom_unkillable_task() (effectively has_intersects_mems_allowed()) returns
> true, I worried that an OOM victim could select the next OOM victim from a
> wider set of targets once MMF_OOM_SKIP has already been set on the OOM
> victim's mm. That might be unexpected behavior...

What I've tried to say is that the oom victim should also be present in
the wider oom domain. There are potential races (e.g. the victim having
MMF_OOM_SKIP set) but I am not really sure we should be losing sleep
over those. As I've said in an earlier email, if we see unexpected
behavior we should deal with it.

-- 
Michal Hocko
SUSE Labs



