Re: [PATCH v1] memblock: Initialize the memory of memblock.reserved to MIGRATE_MOVABLE

The main purpose of this patch is to unify the migratetype setting for memblock.reserved memory to MIGRATE_MOVABLE, both with and without CONFIG_DEFERRED_STRUCT_PAGE_INIT.

Thanks
suhua

suhua <suhua.tanke@xxxxxxxxx> wrote on Wed, 25 Sep 2024 at 19:02:
After sparse_init() allocates memory for struct pages from memblock and
adds it to memblock.reserved, that memory range is present in both
memblock.memory and memblock.reserved.

When CONFIG_DEFERRED_STRUCT_PAGE_INIT is not set, memmap_init() is called
while initializing the free areas of the zones. It uses
for_each_mem_pfn_range() to initialize all of memblock.memory, including
memory that is also placed in memblock.reserved, such as the struct page
metadata that describes the pages (about 16GB per 1TB of memory; this
part of the reserved memory generally accounts for more than 90% of the
system's total reserved memory). So all memory in memblock.memory is set
to MIGRATE_MOVABLE at pageblock_nr_pages alignment. For example, with
hugetlb_optimize_vmemmap=1, when huge pages are allocated the freed
vmemmap pages are placed on buddy's MIGRATE_MOVABLE lists and are
available for use.

When CONFIG_DEFERRED_STRUCT_PAGE_INIT=y, memmap_init() only initializes
the range up to first_deferred_pfn. The subsequent
free_low_memory_core_early() initializes all memblock.reserved memory,
but does not set it to MIGRATE_MOVABLE; only memblock.memory is set to
MIGRATE_MOVABLE when it is released to buddy via
free_low_memory_core_early() and deferred_init_memmap(). As a result,
with hugetlb_optimize_vmemmap=1, when huge pages are allocated the freed
vmemmap pages are placed on buddy's MIGRATE_UNMOVABLE lists (for example,
on a machine with 1TB of memory, allocating 1000GB of 2MB huge pages
frees about 15GB to MIGRATE_UNMOVABLE). Since huge page allocation
requires MIGRATE_MOVABLE pages, a fallback is then performed to steal
memory from MIGRATE_UNMOVABLE for MIGRATE_MOVABLE.

A large amount of UNMOVABLE memory is not conducive to defragmentation,
so set the reserved memory to MIGRATE_MOVABLE as well in the
free_low_memory_core_early() phase, at pageblock_nr_pages alignment.

E.g.:
echo 500000 > /proc/sys/vm/nr_hugepages
cat /proc/pagetypeinfo

before:
Free pages count per migrate type at order       0      1      2      3      4      5      6      7      8      9     10

Node    0, zone   Normal, type    Unmovable     51      2      1     28     53     35     35     43     40     69   3852
Node    0, zone   Normal, type      Movable   6485   4610    666    202    200    185    208     87     54      2    240
Node    0, zone   Normal, type  Reclaimable      2      2      1     23     13      1      2      1      0      1      0
Node    0, zone   Normal, type   HighAtomic      0      0      0      0      0      0      0      0      0      0      0
Node    0, zone   Normal, type      Isolate      0      0      0      0      0      0      0      0      0      0      0
Unmovable ≈ 15GB

after:
Free pages count per migrate type at order       0      1      2      3      4      5      6      7      8      9     10

Node    0, zone   Normal, type    Unmovable      0      1      1      0      0      0      0      1      1      1      0
Node    0, zone   Normal, type      Movable   1563   4107   1119    189    256    368    286    132    109      4   3841
Node    0, zone   Normal, type  Reclaimable      2      2      1     23     13      1      2      1      0      1      0
Node    0, zone   Normal, type   HighAtomic      0      0      0      0      0      0      0      0      0      0      0
Node    0, zone   Normal, type      Isolate      0      0      0      0      0      0      0      0      0      0      0

Signed-off-by: suhua <suhua1@xxxxxxxxxxxx>
---
 mm/mm_init.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/mm_init.c b/mm/mm_init.c
index 4ba5607aaf19..e0190e3f8f26 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -722,6 +722,12 @@ static void __meminit init_reserved_page(unsigned long pfn, int nid)
                if (zone_spans_pfn(zone, pfn))
                        break;
        }
+
+       if (pageblock_aligned(pfn)) {
+               set_pageblock_migratetype(pfn_to_page(pfn), MIGRATE_MOVABLE);
+               cond_resched();
+       }
+
        __init_single_page(pfn_to_page(pfn), pfn, zid, nid);
 }
 #else
--
2.34.1

