+ mm-remove-extra-drain-pages-on-pcp-list.patch added to -mm tree

The patch titled
     Subject: mm: remove extra drain pages on pcp list
has been added to the -mm tree.  Its filename is
     mm-remove-extra-drain-pages-on-pcp-list.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-remove-extra-drain-pages-on-pcp-list.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-remove-extra-drain-pages-on-pcp-list.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Wei Yang <richard.weiyang@xxxxxxxxx>
Subject: mm: remove extra drain pages on pcp list

In the current implementation, there are two places that isolate a range
of pages: __offline_pages() and alloc_contig_range().  During this
procedure, pages on the pcp lists are drained.

Below is a brief call flow:

  __offline_pages()/alloc_contig_range()
      start_isolate_page_range()
          set_migratetype_isolate()
              drain_all_pages()
      drain_all_pages()                 <--- A

This shows that the current logic isolates and drains the pcp list for
each pageblock, and then drains the pcp list again for the whole range.
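
As a condensed sketch of the pre-patch flow (simplified, not the
verbatim kernel source), the drain at point A repeats work the
per-pageblock loop has already done:

	/* start_isolate_page_range(), roughly: */
	for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages) {
		page = __first_valid_page(pfn, pageblock_nr_pages);
		if (page && set_migratetype_isolate(page, migratetype, flags))
			goto undo;
		/* on success, set_migratetype_isolate() has already
		 * called drain_all_pages(zone) for this zone */
	}
	/* ... back in the caller, drain_all_pages() runs again (A) */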

start_isolate_page_range is responsible for isolating the given pfn
range.  One part of that job is to make sure that pages on the
allocator pcp lists are properly isolated as well.  Otherwise they
could be reused and the range wouldn't be completely isolated until the
memory is freed back.  While there is no strict guarantee here, because
pages might get allocated at any time before drain_all_pages is called,
there doesn't seem to be any strong demand for such a guarantee.
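
To illustrate that window, consider a hypothetical interleaving (not
taken from the patch itself):

	CPU 0 (isolation)                 CPU 1 (allocation)
	set_migratetype_isolate()
	  pageblock -> MIGRATE_ISOLATE
	                                  rmqueue_pcplist() hands out a
	                                  page still sitting on CPU 1's
	                                  pcp list
	drain_all_pages()
	  remaining pcp pages return to
	  the buddy free lists, where the
	  isolated migratetype now applies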

In any case, draining is already done at the isolation level, and there
is no need to do it again later in the start_isolate_page_range callers
(currently memory hotplug and the CMA allocator).  Therefore remove the
pointless draining in the existing callers to make the code clearer
while remaining functionally correct.

[mhocko@xxxxxxxx: provide a clearer changelog for the last two paragraphs]
Link: http://lkml.kernel.org/r/20190105233141.2329-1-richard.weiyang@xxxxxxxxx
Signed-off-by: Wei Yang <richard.weiyang@xxxxxxxxx>
Acked-by: Michal Hocko <mhocko@xxxxxxxx>
Acked-by: David Hildenbrand <david@xxxxxxxxxx>
Cc: Oscar Salvador <osalvador@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---


--- a/mm/memory_hotplug.c~mm-remove-extra-drain-pages-on-pcp-list
+++ a/mm/memory_hotplug.c
@@ -1635,7 +1635,6 @@ static int __ref __offline_pages(unsigne
 
 			cond_resched();
 			lru_add_drain_all();
-			drain_all_pages(zone);
 
 			pfn = scan_movable_pages(pfn, end_pfn);
 			if (pfn) {
--- a/mm/page_alloc.c~mm-remove-extra-drain-pages-on-pcp-list
+++ a/mm/page_alloc.c
@@ -8196,7 +8196,6 @@ int alloc_contig_range(unsigned long sta
 	 */
 
 	lru_add_drain_all();
-	drain_all_pages(cc.zone);
 
 	order = 0;
 	outer_start = start;
_

Patches currently in -mm which might be from richard.weiyang@xxxxxxxxx are

mm-slub-make-the-comment-of-put_cpu_partial-complete.patch
mm-remove-extra-drain-pages-on-pcp-list.patch
mm-page_alloc-calculate-first_deferred_pfn-directly.patch



