The patch titled
     Subject: mm/page_alloc: rename alloc_mask to alloc_gfp
has been added to the -mm tree.  Its filename is
     mm-page_alloc-rename-alloc_mask-to-alloc_gfp.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-page_alloc-rename-alloc_mask-to-alloc_gfp.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-page_alloc-rename-alloc_mask-to-alloc_gfp.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
Subject: mm/page_alloc: rename alloc_mask to alloc_gfp

Patch series "Rationalise __alloc_pages wrappers", v3.

I was poking around the __alloc_pages variants trying to understand why
they each exist, and couldn't really find a good justification for
keeping __alloc_pages and __alloc_pages_nodemask as separate functions.
That led to getting rid of alloc_pages_current() and then I noticed the
documentation was bad, and then I noticed the mempolicy documentation
wasn't included.

Anyway, this is all cleanups & doc fixes.

This patch (of 7):

We have two masks involved -- the nodemask and the gfp mask, so
alloc_mask is an unclear name.

Link: https://lkml.kernel.org/r/20210225150642.2582252-2-willy@xxxxxxxxxxxxx
Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Mike Rapoport <rppt@xxxxxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page_alloc.c |   19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

--- a/mm/page_alloc.c~mm-page_alloc-rename-alloc_mask-to-alloc_gfp
+++ a/mm/page_alloc.c
@@ -4920,7 +4920,7 @@ got_pg:
 
 static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 		int preferred_nid, nodemask_t *nodemask,
-		struct alloc_context *ac, gfp_t *alloc_mask,
+		struct alloc_context *ac, gfp_t *alloc_gfp,
 		unsigned int *alloc_flags)
 {
 	ac->highest_zoneidx = gfp_zone(gfp_mask);
@@ -4929,7 +4929,7 @@ static inline bool prepare_alloc_pages(g
 	ac->migratetype = gfp_migratetype(gfp_mask);
 
 	if (cpusets_enabled()) {
-		*alloc_mask |= __GFP_HARDWALL;
+		*alloc_gfp |= __GFP_HARDWALL;
 		/*
 		 * When we are in the interrupt context, it is irrelevant
 		 * to the current task context. It means that any node ok.
@@ -4973,7 +4973,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, u
 {
 	struct page *page;
 	unsigned int alloc_flags = ALLOC_WMARK_LOW;
-	gfp_t alloc_mask; /* The gfp_t that was actually used for allocation */
+	gfp_t alloc_gfp; /* The gfp_t that was actually used for allocation */
 	struct alloc_context ac = { };
 
 	/*
@@ -4986,8 +4986,9 @@ __alloc_pages_nodemask(gfp_t gfp_mask, u
 	}
 
 	gfp_mask &= gfp_allowed_mask;
-	alloc_mask = gfp_mask;
-	if (!prepare_alloc_pages(gfp_mask, order, preferred_nid, nodemask, &ac, &alloc_mask, &alloc_flags))
+	alloc_gfp = gfp_mask;
+	if (!prepare_alloc_pages(gfp_mask, order, preferred_nid, nodemask, &ac,
+			&alloc_gfp, &alloc_flags))
 		return NULL;
 
 	/*
@@ -4997,7 +4998,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, u
 	alloc_flags |= alloc_flags_nofragment(ac.preferred_zoneref->zone, gfp_mask);
 
 	/* First allocation attempt */
-	page = get_page_from_freelist(alloc_mask, order, alloc_flags, &ac);
+	page = get_page_from_freelist(alloc_gfp, order, alloc_flags, &ac);
 	if (likely(page))
 		goto out;
 
@@ -5007,7 +5008,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, u
 	 * from a particular context which has been marked by
 	 * memalloc_no{fs,io}_{save,restore}.
 	 */
-	alloc_mask = current_gfp_context(gfp_mask);
+	alloc_gfp = current_gfp_context(gfp_mask);
 	ac.spread_dirty_pages = false;
 
 	/*
@@ -5016,7 +5017,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, u
 	 */
 	ac.nodemask = nodemask;
 
-	page = __alloc_pages_slowpath(alloc_mask, order, &ac);
+	page = __alloc_pages_slowpath(alloc_gfp, order, &ac);
 
 out:
 	if (memcg_kmem_enabled() && (gfp_mask & __GFP_ACCOUNT) && page &&
@@ -5025,7 +5026,7 @@ out:
 		page = NULL;
 	}
 
-	trace_mm_page_alloc(page, order, alloc_mask, ac.migratetype);
+	trace_mm_page_alloc(page, order, alloc_gfp, ac.migratetype);
 
 	return page;
 }
_

Patches currently in -mm which might be from willy@xxxxxxxxxxxxx are

mm-filemap-use-filemap_read_page-in-filemap_fault.patch
mm-filemap-drop-check-for-truncated-page-after-i-o.patch
mm-page_alloc-rename-alloc_mask-to-alloc_gfp.patch
mm-page_alloc-rename-gfp_mask-to-gfp.patch
mm-page_alloc-combine-__alloc_pages-and-__alloc_pages_nodemask.patch
mm-mempolicy-rename-alloc_pages_current-to-alloc_pages.patch
mm-mempolicy-rewrite-alloc_pages-documentation.patch
mm-mempolicy-rewrite-alloc_pages_vma-documentation.patch
mm-mempolicy-fix-mpol_misplaced-kernel-doc.patch
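For readers skimming the rename rationale, a minimal illustrative sketch
follows.  It is not part of the patch: prepare_alloc() is a hypothetical,
simplified stand-in loosely modelled on prepare_alloc_pages() above, and it
assumes an in-kernel build with the usual headers.  It only shows why the
mutable gfp copy reads better as "alloc_gfp" when a nodemask is also in
scope.

	#include <linux/gfp.h>		/* gfp_t, __GFP_HARDWALL */
	#include <linux/nodemask.h>	/* nodemask_t */
	#include <linux/cpuset.h>	/* cpusets_enabled() */

	/*
	 * Sketch only, not kernel code.  Two masks coexist here: the
	 * caller's gfp mask and the nodemask.  Calling the gfp copy
	 * "alloc_gfp" rather than "alloc_mask" says which one it is.
	 */
	static bool prepare_alloc(gfp_t gfp_mask, nodemask_t *nodemask,
				  gfp_t *alloc_gfp)
	{
		*alloc_gfp = gfp_mask;	/* start from the caller's gfp mask */
		if (cpusets_enabled())
			*alloc_gfp |= __GFP_HARDWALL;	/* may gain extra bits */
		return true;
	}

In the patch itself the same pattern appears in __alloc_pages_nodemask():
alloc_gfp starts as a copy of gfp_mask, is adjusted by prepare_alloc_pages()
and current_gfp_context(), and is the value actually passed to the freelist
and slowpath allocators, while the nodemask travels separately in the
alloc_context.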