On 12/3/24 10:47, David Hildenbrand wrote:
> It's all a bit complicated for alloc_contig_range(). For example, we don't
> support many flags, so let's start bailing out on unsupported
> ones -- ignoring the placement hints, as we are already given the range
> to allocate.
> 
> While we currently set cc.gfp_mask, in __alloc_contig_migrate_range() we
> simply create yet another GFP mask whereby we ignore the reclaim flags
> specified by the caller. That looks very inconsistent.
> 
> Let's clean it up, constructing the gfp flags used for
> compaction/migration exactly once. Update the documentation of the
> gfp_mask parameter for alloc_contig_range() and alloc_contig_pages().
> 
> Acked-by: Zi Yan <ziy@xxxxxxxxxx>
> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>

Reviewed-by: Vlastimil Babka <vbabka@xxxxxxx>

> +	/*
> +	 * Flags to control page compaction/migration/reclaim, to free up our
> +	 * page range. Migratable pages are movable, __GFP_MOVABLE is implied
> +	 * for them.
> +	 *
> +	 * Traditionally we always had __GFP_HARDWALL|__GFP_RETRY_MAYFAIL set,
> +	 * keep doing that to not degrade callers.
> +	 */

Wonder if we could revisit that eventually. Why limit migration targets by
cpuset via __GFP_HARDWALL if we were not called with __GFP_HARDWALL? And why
weaken the attempts with __GFP_RETRY_MAYFAIL if we didn't specify it?
Unless I'm missing something, cc->gfp is only checked for __GFP_FS and
__GFP_NOWARN in a few places, so it's mostly migration_target_control that
the callers could meaningfully influence.

> +	*gfp_cc_mask = (gfp_mask & (reclaim_mask | cc_action_mask)) |
> +		       __GFP_HARDWALL | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL;
> +	return 0;
> +}
> +
>  /**
>   * alloc_contig_range() -- tries to allocate given range of pages
>   * @start:	start PFN to allocate
> @@ -6398,7 +6431,9 @@ static void split_free_pages(struct list_head *list)
>   *			#MIGRATE_MOVABLE or #MIGRATE_CMA). All pageblocks
>   *			in range must have the same migratetype and it must
>   *			be either of the two.
> - * @gfp_mask:	GFP mask to use during compaction
> + * @gfp_mask:	GFP mask. Node/zone/placement hints are ignored; only some
> + *		action and reclaim modifiers are supported. Reclaim modifiers
> + *		control allocation behavior during compaction/migration/reclaim.
>   *
>   * The PFN range does not have to be pageblock aligned. The PFN range must
>   * belong to a single zone.
> @@ -6424,11 +6459,14 @@ int alloc_contig_range_noprof(unsigned long start, unsigned long end,
>  		.mode = MIGRATE_SYNC,
>  		.ignore_skip_hint = true,
>  		.no_set_skip_hint = true,
> -		.gfp_mask = current_gfp_context(gfp_mask),
>  		.alloc_contig = true,
>  	};
>  	INIT_LIST_HEAD(&cc.migratepages);
>  
> +	gfp_mask = current_gfp_context(gfp_mask);
> +	if (__alloc_contig_verify_gfp_mask(gfp_mask, (gfp_t *)&cc.gfp_mask))
> +		return -EINVAL;
> +
>  	/*
>  	 * What we do here is we mark all pageblocks in range as
>  	 * MIGRATE_ISOLATE. Because pageblock and max order pages may
> @@ -6571,7 +6609,9 @@ static bool zone_spans_last_pfn(const struct zone *zone,
>  /**
>   * alloc_contig_pages() -- tries to find and allocate contiguous range of pages
>   * @nr_pages:	Number of contiguous pages to allocate
> - * @gfp_mask:	GFP mask to limit search and used during compaction
> + * @gfp_mask:	GFP mask. Node/zone/placement hints limit the search; only some
> + *		action and reclaim modifiers are supported. Reclaim modifiers
> + *		control allocation behavior during compaction/migration/reclaim.
>   * @nid:	Target node
>   * @nodemask:	Mask for other possible nodes
>   *