On Thu 03-11-22 14:36:40, Yang Shi wrote:
[...]
> So use a nodemask to record the nodes which have the same hit record; the
> hugepage allocation can then fall back to those nodes. Also remove
> __GFP_THISNODE, since it disallows fallback. If the nodemask is empty
> (no node is set), it means a single node has the most hit records, and
> the nodemask approach effectively behaves like __GFP_THISNODE.
>
> Reported-by: syzbot+0044b22d177870ee974f@xxxxxxxxxxxxxxxxxxxxxxxxx
> Suggested-by: Zach O'Keefe <zokeefe@xxxxxxxxxx>
> Suggested-by: Michal Hocko <mhocko@xxxxxxxx>
> Signed-off-by: Yang Shi <shy828301@xxxxxxxxx>
> ---
>  mm/khugepaged.c | 32 ++++++++++++++------------------
>  1 file changed, 14 insertions(+), 18 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index ea0d186bc9d4..572ce7dbf4b0 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -97,8 +97,8 @@ struct collapse_control {
>  	/* Num pages scanned per node */
>  	u32 node_load[MAX_NUMNODES];
>
> -	/* Last target selected in hpage_collapse_find_target_node() */
> -	int last_target_node;
> +	/* nodemask for allocation fallback */
> +	nodemask_t alloc_nmask;

This will eat another 128 bytes on the stack on most configurations
(NODES_SHIFT=10, i.e. a 1024-bit nodemask). Along with the 4k of
node_load this is quite a lot, even on shallow call chains like the
madvise and khugepaged paths. I would just add a follow-up patch which
changes both node_load and alloc_nmask to dynamically allocated objects
(rough sketch below).

Other than that LGTM. I thought we wanted to keep __GFP_THISNODE, but
after a closer look it seems that this flag is not really compatible
with a nodemask after all. node_zonelist() will simply return the
trivial zonelist for the single preferred node, so no fallback to other
nodes is possible. My bad for not realizing it earlier.

--
Michal Hocko
SUSE Labs
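For the follow-up patch mentioned above, something along these lines
should do (completely untested sketch; the field names come from the
patch quoted above, and madvise_collapse() is merely an example caller —
the khugepaged kthread could keep using a statically allocated
collapse_control):

	struct collapse_control *cc;

	cc = kmalloc(sizeof(*cc), GFP_KERNEL);
	if (!cc)
		return -ENOMEM;

	/* reset the per-scan state that currently lives on the stack */
	memset(cc->node_load, 0, sizeof(cc->node_load));
	nodes_clear(cc->alloc_nmask);

	/* ... do the scan/collapse passing cc around ... */

	kfree(cc);

That way neither the 4k node_load array nor the nodemask consumes stack
space on those call chains.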
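And to spell out why __GFP_THISNODE cannot be combined with a nodemask:
the zonelist selection in include/linux/gfp.h looks like the below
(quoting from memory, so modulo details):

	/* __GFP_THISNODE selects the no-fallback zonelist of the node */
	static inline int gfp_zonelist(gfp_t flags)
	{
	#ifdef CONFIG_NUMA
		if (unlikely(flags & __GFP_THISNODE))
			return ZONELIST_NOFALLBACK;
	#endif
		return ZONELIST_FALLBACK;
	}

	static inline struct zonelist *node_zonelist(int nid, gfp_t flags)
	{
		return NODE_DATA(nid)->node_zonelists + gfp_zonelist(flags);
	}

The ZONELIST_NOFALLBACK zonelist only contains the zones of the
preferred node itself, so a nodemask handed to the allocator can only
narrow the allocation further; it can never extend it to other nodes.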