Re: [External] Re: [v1 4/6] memblock: introduce MEMBLOCK_RSRV_NOINIT flag


 





On 28/07/2023 05:30, Mika Penttilä wrote:
Hi,

On 7/27/23 23:46, Usama Arif wrote:

For reserved memory regions marked with this flag,
reserve_bootmem_region is not called during memmap_init_reserved_pages.
This can be used to avoid struct page initialization for
regions which won't need them, for e.g. hugepages with
HVO enabled.

Signed-off-by: Usama Arif <usama.arif@xxxxxxxxxxxxx>
---
  include/linux/memblock.h |  7 +++++++
  mm/memblock.c            | 32 ++++++++++++++++++++++++++------
  2 files changed, 33 insertions(+), 6 deletions(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index f71ff9f0ec81..7f9d06c08592 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -47,6 +47,7 @@ enum memblock_flags {
      MEMBLOCK_MIRROR        = 0x2,    /* mirrored region */
      MEMBLOCK_NOMAP        = 0x4,    /* don't add to kernel direct mapping */
      MEMBLOCK_DRIVER_MANAGED = 0x8,    /* always detected via a driver */
+    MEMBLOCK_RSRV_NOINIT    = 0x10,    /* don't call reserve_bootmem_region for this region */
  };
  /**
@@ -125,6 +126,7 @@ int memblock_clear_hotplug(phys_addr_t base, phys_addr_t size);
  int memblock_mark_mirror(phys_addr_t base, phys_addr_t size);
  int memblock_mark_nomap(phys_addr_t base, phys_addr_t size);
  int memblock_clear_nomap(phys_addr_t base, phys_addr_t size);
+int memblock_rsrv_mark_noinit(phys_addr_t base, phys_addr_t size);
  void memblock_free_all(void);
  void memblock_free(void *ptr, size_t size);
@@ -259,6 +261,11 @@ static inline bool memblock_is_nomap(struct memblock_region *m)
      return m->flags & MEMBLOCK_NOMAP;
  }
+static inline bool memblock_is_noinit(struct memblock_region *m)
+{
+    return m->flags & MEMBLOCK_RSRV_NOINIT;
+}
+
  static inline bool memblock_is_driver_managed(struct memblock_region *m)
  {
      return m->flags & MEMBLOCK_DRIVER_MANAGED;
diff --git a/mm/memblock.c b/mm/memblock.c
index 4fd431d16ef2..3a15708af3b6 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -997,6 +997,22 @@ int __init_memblock memblock_clear_nomap(phys_addr_t base, phys_addr_t size)
      return memblock_setclr_flag(base, size, 0, MEMBLOCK_NOMAP, 0);
  }
+/**
+ * memblock_rsrv_mark_noinit - Mark a reserved memory region with flag MEMBLOCK_RSRV_NOINIT.
+ * @base: the base phys addr of the region
+ * @size: the size of the region
+ *
+ * For memory regions marked with %MEMBLOCK_RSRV_NOINIT, reserve_bootmem_region
+ * is not called during memmap_init_reserved_pages, hence struct pages are not
+ * initialized for this region.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+int __init_memblock memblock_rsrv_mark_noinit(phys_addr_t base, phys_addr_t size)
+{
+    return memblock_setclr_flag(base, size, 1, MEMBLOCK_RSRV_NOINIT, 1);
+}
+
  static bool should_skip_region(struct memblock_type *type,
                     struct memblock_region *m,
                     int nid, int flags)
@@ -2113,13 +2129,17 @@ static void __init memmap_init_reserved_pages(void)
          memblock_set_node(start, end, &memblock.reserved, nid);
      }
-    /* initialize struct pages for the reserved regions */
+    /*
+     * initialize struct pages for reserved regions that don't have
+     * the MEMBLOCK_RSRV_NOINIT flag set
+     */
      for_each_reserved_mem_region(region) {
-        nid = memblock_get_region_node(region);
-        start = region->base;
-        end = start + region->size;
-
-        reserve_bootmem_region(start, end, nid);
+        if (!memblock_is_noinit(region)) {
+            nid = memblock_get_region_node(region);
+            start = region->base;
+            end = start + region->size;
+            reserve_bootmem_region(start, end, nid);
+        }
      }
  }

There's code like:

static inline void free_vmemmap_page(struct page *page)
{
         if (PageReserved(page))
                 free_bootmem_page(page);
         else
                 __free_page(page);
}

which depends on the PageReserved flag being set in the vmemmap pages, so I think you can't skip that part?


free_vmemmap_page_list (and hence free_vmemmap_page) is called on the struct pages (refer to these as [1]) that describe the memory *which contains* the struct pages (refer to these as [2]) for the hugepage. The above if (!memblock_is_noinit(region)) check, which skips reserve_bootmem_region, only applies to the struct pages [2] for the hugepage. The struct pages [1] are not changed by my patch, so their PageReserved state is unaffected.
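To make the struct pages [2] side concrete, here is a minimal, hypothetical sketch (illustrative only, the helper name below is made up and is not the actual caller in this series) of how a boot-time allocation could mark the hugepage memory so that memmap_init_reserved_pages skips it:

/*
 * Illustrative sketch: allocate a gigantic page from memblock at boot and
 * mark it MEMBLOCK_RSRV_NOINIT so that memmap_init_reserved_pages() skips
 * initializing the struct pages [2] that describe it.
 */
static phys_addr_t __init alloc_gigantic_noinit(phys_addr_t size)
{
        phys_addr_t base = memblock_phys_alloc(size, size);

        if (base)
                memblock_rsrv_mark_noinit(base, size);

        return base;
}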

As an experiment, if I run the diff at the bottom with and without these patches, I get the same log "HugeTLB: reserved pages 4096, normal pages 0", which means those struct pages are treated the same with and without these patches. (It's 4096 because 262144 struct pages [2] per 1G hugepage * 64 bytes per struct page / PAGE_SIZE = 4096 struct pages [1].)

Also, I should have mentioned in the cover letter that I used cat /proc/meminfo to make sure it was working as expected, reserving 500 1G hugepages with and without these patches. When hugetlb_free_vmemmap=on:
MemTotal:       536207112 kB (511.4G)

When hugetlb_free_vmemmap=off:
MemTotal:       528015112 kB (503G)


The expectation is that for 500 1G hugepages, HVO gives a saving of 16380K * 500 = ~8GB, which is what we see both with and without these patches (511.4G - 503G). These patches didn't affect these numbers.
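Spelling out the arithmetic from the two meminfo readings above: the measured difference is 536207112 kB - 528015112 kB = 8192000 kB, i.e. roughly 7.8G, which lines up with the expected 16380K * 500 = 8190000 kB of vmemmap freed by HVO for 500 1G hugepages.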



diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index b5b7834e0f42..bc0ec90552b7 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -208,6 +208,8 @@ static int vmemmap_remap_range(unsigned long start, unsigned long end,
        return 0;
 }

+static int i = 0, j = 0;
+
 /*
  * Free a vmemmap page. A vmemmap page can be allocated from the memblock
  * allocator or buddy allocator. If the PG_reserved flag is set, it means
@@ -216,10 +218,14 @@ static int vmemmap_remap_range(unsigned long start, unsigned long end,
  */
 static inline void free_vmemmap_page(struct page *page)
 {
-       if (PageReserved(page))
+       if (PageReserved(page)) {
+               i++;
                free_bootmem_page(page);
-       else
+       }
+       else {
+               j++;
                __free_page(page);
+       }
 }

 /* Free a list of the vmemmap pages */
@@ -380,6 +386,7 @@ static int vmemmap_remap_free(unsigned long start, unsigned long end,

        free_vmemmap_page_list(&vmemmap_pages);

+       pr_err("reserved pages %u, normal pages %u", i, j);
        return ret;
 }





--Mika





