On Tue 05-11-19 17:01:00, David Rientjes wrote:
> On Tue, 5 Nov 2019, Michal Hocko wrote:
>
> > > > Thanks, I'll queue this for some more testing. At some point we should
> > > > decide on a suitable set of Fixes: tags and a backporting strategy, if any?
> > > >
> > >
> > > I'd strongly suggest that Andrea test this patch out on his workload on
> > > hosts where all nodes are low on memory because based on my understanding
> > > of his reported issue this would result in swap storms reemerging but
> > > worse this time because they wouldn't be constrained only locally. (This
> > > patch causes us to no longer circumvent excessive reclaim when using
> > > MADV_HUGEPAGE.)
> >
> > Could you be more specific on why this would be the case? My testing
> > doesn't show any such signs and I am effectively testing a memory-low
> > situation. The amount of reclaimed memory matches the amount of
> > requested memory.
>
> The follow-up allocation in alloc_pages_vma() would no longer use
> __GFP_NORETRY and there is no special handling to avoid swap storms in the
> page allocator anymore as a result of this patch.

Yes, there is no __GFP_NORETRY in the fallback path, because how hard to
retry is decided by alloc_hugepage_direct_gfpmask, depending on the defrag
mode and the madvise mode.

> I don't see any
> indication that this allocation would behave any different than the code
> that Andrea experienced swap storms with, but now worse if remote memory
> is in the same state local memory is when he's using __GFP_THISNODE.

The primary reason for the extensive swapping was exactly the
__GFP_THISNODE in conjunction with an unbounded direct reclaim, AFAIR.
The whole point of Vlastimil's patch is to do an optimistic local-node
allocation first, with the full gfp context used only in the fallback
path. If our full gfp context doesn't really work well, then we can of
course revisit that, but that should happen at the
alloc_hugepage_direct_gfpmask level.
--
Michal Hocko
SUSE Labs