The patch titled
     Subject: mm/page_alloc: introduce post allocation processing on page allocator
has been removed from the -mm tree.  Its filename was
     mm-page_alloc-introduce-post-allocation-processing-on-page-allocator.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Subject: mm/page_alloc: introduce post allocation processing on page allocator

This patch is motivated by Hugh's and Vlastimil's concern [1].

There are two ways to get a free page from the allocator.  One is using
the normal memory allocation API; the other is __isolate_free_page(),
which is used internally for compaction and pageblock isolation.  The
latter is rather tricky since it doesn't do all of the post-allocation
processing done by the normal API.

One problem I already know of is that a poisoned page is not checked
when it is allocated by __isolate_free_page().  There may be more.  We
could add more debug logic for allocated pages in the future, and this
separation would cause further problems, so I'd like to fix the
situation now.

The solution is simple: commonize the post-allocation logic for newly
allocated pages and use it at all sites.
[1] http://marc.info/?i=alpine.LSU.2.11.1604270029350.7066%40eggly.anvils%3E

[iamjoonsoo.kim@xxxxxxx: mm-page_alloc-introduce-post-allocation-processing-on-page-allocator-v3]
Link: http://lkml.kernel.org/r/1464230275-25791-7-git-send-email-iamjoonsoo.kim@xxxxxxx
Link: http://lkml.kernel.org/r/1466150259-27727-9-git-send-email-iamjoonsoo.kim@xxxxxxx
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Minchan Kim <minchan@xxxxxxxxxx>
Cc: Alexander Potapenko <glider@xxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/compaction.c     |    8 +-------
 mm/internal.h       |    2 ++
 mm/page_alloc.c     |   23 ++++++++++++++---------
 mm/page_isolation.c |    4 +---
 4 files changed, 18 insertions(+), 19 deletions(-)

diff -puN mm/compaction.c~mm-page_alloc-introduce-post-allocation-processing-on-page-allocator mm/compaction.c
--- a/mm/compaction.c~mm-page_alloc-introduce-post-allocation-processing-on-page-allocator
+++ a/mm/compaction.c
@@ -74,14 +74,8 @@ static void map_pages(struct list_head *
 		order = page_private(page);
 		nr_pages = 1 << order;
 
-		set_page_private(page, 0);
-		set_page_refcounted(page);
-		arch_alloc_page(page, order);
-		kernel_map_pages(page, nr_pages, 1);
-		kasan_alloc_pages(page, order);
-
-		set_page_owner(page, order, __GFP_MOVABLE);
+		post_alloc_hook(page, order, __GFP_MOVABLE);
 
 		if (order)
 			split_page(page, order);
diff -puN mm/internal.h~mm-page_alloc-introduce-post-allocation-processing-on-page-allocator mm/internal.h
--- a/mm/internal.h~mm-page_alloc-introduce-post-allocation-processing-on-page-allocator
+++ a/mm/internal.h
@@ -150,6 +150,8 @@ extern int __isolate_free_page(struct pa
 extern void __free_pages_bootmem(struct page *page, unsigned long pfn,
 					unsigned int order);
 extern void prep_compound_page(struct page *page, unsigned int order);
+extern void post_alloc_hook(struct page *page, unsigned int order,
+					gfp_t gfp_flags);
 extern int user_min_free_kbytes;
 
 #if defined CONFIG_COMPACTION || defined CONFIG_CMA
diff -puN mm/page_alloc.c~mm-page_alloc-introduce-post-allocation-processing-on-page-allocator mm/page_alloc.c
--- a/mm/page_alloc.c~mm-page_alloc-introduce-post-allocation-processing-on-page-allocator
+++ a/mm/page_alloc.c
@@ -1724,6 +1724,19 @@ static bool check_new_pages(struct page
 	return false;
 }
 
+inline void post_alloc_hook(struct page *page, unsigned int order,
+				gfp_t gfp_flags)
+{
+	set_page_private(page, 0);
+	set_page_refcounted(page);
+
+	arch_alloc_page(page, order);
+	kernel_map_pages(page, 1 << order, 1);
+	kernel_poison_pages(page, 1 << order, 1);
+	kasan_alloc_pages(page, order);
+	set_page_owner(page, order, gfp_flags);
+}
+
 static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
 							unsigned int alloc_flags)
 {
@@ -1736,13 +1749,7 @@ static void prep_new_page(struct page *p
 		poisoned &= page_is_poisoned(p);
 	}
 
-	set_page_private(page, 0);
-	set_page_refcounted(page);
-
-	arch_alloc_page(page, order);
-	kernel_map_pages(page, 1 << order, 1);
-	kernel_poison_pages(page, 1 << order, 1);
-	kasan_alloc_pages(page, order);
+	post_alloc_hook(page, order, gfp_flags);
 
 	if (!free_pages_prezeroed(poisoned) && (gfp_flags & __GFP_ZERO))
 		for (i = 0; i < (1 << order); i++)
@@ -1751,8 +1758,6 @@ static void prep_new_page(struct page *p
 	if (order && (gfp_flags & __GFP_COMP))
 		prep_compound_page(page, order);
 
-	set_page_owner(page, order, gfp_flags);
-
 	/*
 	 * page is set pfmemalloc when ALLOC_NO_WATERMARKS was necessary to
 	 * allocate the page. The expectation is that the caller is taking
diff -puN mm/page_isolation.c~mm-page_alloc-introduce-post-allocation-processing-on-page-allocator mm/page_isolation.c
--- a/mm/page_isolation.c~mm-page_alloc-introduce-post-allocation-processing-on-page-allocator
+++ a/mm/page_isolation.c
@@ -128,9 +128,7 @@ static void unset_migratetype_isolate(st
 out:
 	spin_unlock_irqrestore(&zone->lock, flags);
 	if (isolated_page) {
-		kernel_map_pages(page, (1 << order), 1);
-		set_page_refcounted(page);
-		set_page_owner(page, order, __GFP_MOVABLE);
+		post_alloc_hook(page, order, __GFP_MOVABLE);
 		__free_pages(isolated_page, order);
 	}
 }
_

Patches currently in -mm which might be from iamjoonsoo.kim@xxxxxxx are