On Wed, 4 Sep 2019, Andrea Arcangeli wrote:

> > This is an admittedly hacky solution that shouldn't cause anybody to
> > regress based on NUMA and the semantics of MADV_HUGEPAGE for the past
> > 4 1/2 years for users whose workload does fit within a socket.
>
> How can you live with the below if you can't live with 5.3-rc6? Here
> you allocate remote THP if the local THP allocation fails.
>
> > 		page = __alloc_pages_node(hpage_node,
> > 					gfp | __GFP_THISNODE, order);
> > +
> > +		/*
> > +		 * If hugepage allocations are configured to always
> > +		 * synchronous compact or the vma has been madvised
> > +		 * to prefer hugepage backing, retry allowing remote
> > +		 * memory as well.
> > +		 */
> > +		if (!page && (gfp & __GFP_DIRECT_RECLAIM))
> > +			page = __alloc_pages_node(hpage_node,
> > +					gfp | __GFP_NORETRY, order);
> > +
>
> You're still going to get THP allocated remotely _before_ you have a
> chance to allocate 4k locally this way. __GFP_NORETRY won't make any
> difference when there's THP immediately available in the remote nodes.

This is incorrect: the fallback allocation here is attempted only if the
initial allocation with __GFP_THISNODE fails. In that case, we were
unable to compact memory to make a local hugepage available without
incurring excessive swap, based on the RFC patch that appears as patch 3
in this series. I very much believe your usecase would benefit from this
as well (or at least not cause others to regress).

We *want* remote THP if it is immediately available, but only after we
have tried to allocate locally from the initial allocation and allowed
memory compaction to fail first (see the sketch at the end of this
mail).

Likely there can be discussion around the fourth patch of this series to
get exactly the right policy. We can construct it as necessary so that
hugetlbfs has no change in behavior; that's simple. We could also check
per-zone watermarks in mm/huge_memory.c to determine whether the local
node is low on memory and, if so, allow remote allocation. In that case
it's certainly better to allocate remotely than to reclaim locally, even
for fallback native pages.

> I said one good thing about this patch series, that it fixes the swap
> storms. But upstream 5.3 fixes the swap storms too, and what you sent
> is not nearly equivalent to the mempolicy that Michal was willing to
> provide you and that we thought you needed to get bigger guarantees of
> getting only local 2m or local 4k pages.

I haven't seen such a patch series; is there a link?
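
To make the ordering above concrete, here is a minimal illustrative
sketch of the policy being argued for, modeled on the quoted hunk
against alloc_pages_vma(). It is not the posted patch itself; the
function name is made up, and the caller is assumed to fall back to
native 4k pages when this returns NULL.

	/*
	 * Sketch only: local THP first; remote THP only after local
	 * compaction has had a chance to run and fail.
	 */
	static struct page *thp_alloc_local_then_remote(gfp_t gfp, int order,
							int hpage_node)
	{
		struct page *page;

		/* Step 1: local node only; direct compaction may run here. */
		page = __alloc_pages_node(hpage_node,
					  gfp | __GFP_THISNODE, order);
		if (page)
			return page;

		/*
		 * Step 2: the local THP attempt failed, so remote THP is
		 * now acceptable, but only opportunistically
		 * (__GFP_NORETRY) so we don't thrash remote nodes before
		 * the caller's native 4k fallback.
		 */
		if (gfp & __GFP_DIRECT_RECLAIM)
			page = __alloc_pages_node(hpage_node,
						  gfp | __GFP_NORETRY, order);
		return page;
	}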
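
And a hypothetical sketch of the watermark check mentioned above:
node_low_on_memory() is a name invented here, it exists in no posted
patch. It checks the min watermark of every populated zone on the local
node at the hugepage order to decide whether remote allocation should be
allowed instead of local reclaim.

	/* Hypothetical helper, not from any posted patch. */
	static bool node_low_on_memory(int nid, unsigned int order)
	{
		struct zone *zone;
		int i;

		for (i = 0; i < MAX_NR_ZONES; i++) {
			zone = &NODE_DATA(nid)->node_zones[i];
			if (!populated_zone(zone))
				continue;
			/* Some local zone can still satisfy the order. */
			if (zone_watermark_ok(zone, order,
					      min_wmark_pages(zone), 0, 0))
				return false;
		}
		/* Every local zone is below its min watermark. */
		return true;
	}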