The patch titled
     Subject: mm, page_alloc: only use per-cpu allocator for irq-safe requests -fix
has been removed from the -mm tree.  Its filename was
     mm-page_alloc-only-use-per-cpu-allocator-for-irq-safe-requests-fix.patch

This patch was dropped because it was folded into
mm-page_alloc-only-use-per-cpu-allocator-for-irq-safe-requests.patch

------------------------------------------------------
From: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Subject: mm, page_alloc: only use per-cpu allocator for irq-safe requests -fix

preempt_enable_no_resched() was used on the basis of review feedback to
which there was no strong objection at the time.  The thinking was that it
avoided adding a preemption point where one didn't exist before, so the
feedback was applied.  This reasoning was wrong.  As Thomas Gleixner
explained, there was an indirect preemption point: an interrupt could
set_need_resched(), after which the subsequent preempt_enable() becomes
the preemption point that matters.  This use of
preempt_enable_no_resched() is bad from both a mainline and an RT
perspective and a violation of the preemption mechanism.  Peter Zijlstra
noted that "the only acceptable use of preempt_enable_no_resched() is if
the next statement is a schedule() variant".  The usage was outright
broken and I should have stuck with preempt_enable() as originally
developed.  Previous tests showed no detectable performance difference
from using preempt_enable_no_resched().

This is a fix to the mmotm patch
mm-page_alloc-only-use-per-cpu-allocator-for-irq-safe-requests.patch.

Link: http://lkml.kernel.org/r/20170208143128.25ahymqlyspjcixu@xxxxxxxxxxxxxxxxxxx
Signed-off-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Reviewed-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Acked-by: Hillf Danton <hillf.zj@xxxxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page_alloc.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff -puN mm/page_alloc.c~mm-page_alloc-only-use-per-cpu-allocator-for-irq-safe-requests-fix mm/page_alloc.c
--- a/mm/page_alloc.c~mm-page_alloc-only-use-per-cpu-allocator-for-irq-safe-requests-fix
+++ a/mm/page_alloc.c
@@ -2517,7 +2517,7 @@ void free_hot_cold_page(struct page *pag
 	}

 out:
-	preempt_enable_no_resched();
+	preempt_enable();
 }

 /*
@@ -2683,7 +2683,7 @@ static struct page *rmqueue_pcplist(stru
 		__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
 		zone_statistics(preferred_zone, zone);
 	}
-	preempt_enable_no_resched();
+	preempt_enable();
 	return page;
 }

_

Patches currently in -mm which might be from mgorman@xxxxxxxxxxxxxxxxxxx are

mm-page_alloc-split-buffered_rmqueue.patch
mm-page_alloc-split-alloc_pages_nodemask.patch
mm-page_alloc-drain-per-cpu-pages-from-workqueue-context.patch
mm-page_alloc-only-use-per-cpu-allocator-for-irq-safe-requests.patch
mm-page_alloc-use-static-global-work_struct-for-draining-per-cpu-pages.patch
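
To make the rule concrete, below is a minimal sketch of the two patterns
discussed in the changelog above.  It is illustrative only and not part
of the patch: the function names are hypothetical, while
preempt_disable(), preempt_enable(), preempt_enable_no_resched() and
schedule() are the real kernel interfaces.

#include <linux/preempt.h>
#include <linux/sched.h>

/*
 * The pattern the fix restores: preempt_enable() re-checks the
 * need-resched flag, so a wakeup raised by an interrupt during the
 * critical section is acted on immediately rather than deferred to
 * some later, unrelated preemption point.
 */
static void pcp_critical_section_sketch(void)
{
	preempt_disable();

	/*
	 * Per-cpu work happens here.  An interrupt arriving in this
	 * window may mark the current task as needing to reschedule.
	 */

	preempt_enable();	/* preemption point: reschedules if needed */
}

/*
 * The only acceptable use of preempt_enable_no_resched(), per Peter
 * Zijlstra: the very next statement is a schedule() variant, so the
 * skipped resched check cannot lose a pending reschedule request.
 */
static void no_resched_then_schedule_sketch(void)
{
	preempt_disable();

	/* ... */

	preempt_enable_no_resched();
	schedule();
}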