On Tue 17-01-17 23:16:06, Vlastimil Babka wrote:
> This is my attempt to fix the recent report based on the LTP cpuset
> stress test [1]. Patches are based on 4.9 as that was the version in
> the initial report, but it was later reported that this problem has
> existed since 4.7. We will probably want to go to stable with this, as
> triggering OOMs is not nice. That's why the patches try not to be too
> intrusive.
>
> Longer term we might think about how to fix the cpuset mess in a better
> and less error-prone way. I was, for example, very surprised to learn
> that cpuset updates change not only task->mems_allowed but also the
> nodemasks of mempolicies. Until now I expected the nodemask parameter
> of alloc_pages_nodemask() to be stable. I wonder why we then treat
> cpusets specially in get_page_from_freelist() and distinguish HARDWALL
> etc., when there is an unconditional intersection between mempolicy and
> cpuset. I would expect the nodemask adjustment to be there to save
> overhead in g_p_f(), but that clearly doesn't happen in the current
> form. So we have both crazy complexity and overhead, AFAICS.

Absolutely agreed! This is a mess which should be fixed, and the
nodemask should be stable for each allocation attempt. Trying to catch
up with concurrent changes is just insane and makes the code more
complicated.

> [1] https://lkml.kernel.org/r/CAFpQJXUq-JuEP=QPidy4p_=FN0rkH5Z-kfB4qBvsf6jMS87Edg@xxxxxxxxxxxxxx
>
> Vlastimil Babka (4):
>   mm, page_alloc: fix check for NULL preferred_zone
>   mm, page_alloc: fix fast-path race with cpuset update or removal
>   mm, page_alloc: move cpuset seqcount checking to slowpath
>   mm, page_alloc: fix premature OOM when racing with cpuset mems update
>
>  mm/page_alloc.c | 58 ++++++++++++++++++++++++++++++++++++---------------------
>  1 file changed, 37 insertions(+), 21 deletions(-)
>
> --
> 2.11.0

--
Michal Hocko
SUSE Labs
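
[Editorial addendum, not part of the original mail: for readers less
familiar with the pattern these patches revolve around, below is a
minimal userspace sketch of the seqcount-retry idea behind the kernel's
read_mems_allowed_begin()/read_mems_allowed_retry() helpers. All names
and types in the sketch are simplified stand-ins, not the actual
mm/page_alloc.c code: sample a sequence count, attempt the allocation
against a snapshot of the allowed nodes, and retry rather than declare
OOM if a concurrent cpuset mems update raced with the attempt.]

/*
 * Simplified, self-contained illustration (build with -std=c11) of the
 * retry-instead-of-OOM pattern.  Kernel details (zonelists, mempolicy
 * rebinding, real seqcount API) are deliberately omitted.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_uint mems_seq;			/* bumped by "cpuset updates" */
static atomic_ulong mems_allowed = 0x1;		/* bitmask of allowed "nodes" */

static unsigned int mems_begin(void)
{
	unsigned int seq;

	do {
		seq = atomic_load(&mems_seq);
	} while (seq & 1);			/* odd: update in progress, wait */
	return seq;
}

static bool mems_retry(unsigned int seq)
{
	return atomic_load(&mems_seq) != seq;	/* mask may have changed */
}

/* What a cpuset mems update would do (the "writer" side). */
static void update_mems_allowed(unsigned long new_mask)
{
	atomic_fetch_add(&mems_seq, 1);		/* odd: update in progress */
	atomic_store(&mems_allowed, new_mask);
	atomic_fetch_add(&mems_seq, 1);		/* even again: stable */
}

/* Pretend allocation: succeeds only if some "node" bit is usable. */
static bool try_alloc(unsigned long mask)
{
	return mask != 0;
}

static bool alloc_with_retry(void)
{
	unsigned int seq;
	bool ok;

	do {
		seq = mems_begin();
		ok = try_alloc(atomic_load(&mems_allowed));
		/*
		 * Without the retry, a failure observed against a stale or
		 * transiently empty mask would be reported as OOM even though
		 * the updated cpuset has usable nodes.
		 */
	} while (!ok && mems_retry(seq));

	return ok;
}

int main(void)
{
	update_mems_allowed(0x2);	/* simulate a concurrent cpuset mems update */
	printf("allocation %s\n",
	       alloc_with_retry() ? "succeeded" : "failed -> OOM");
	return 0;
}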