Re: [RFC PATCH 3/4] mm/page_alloc: introduce __GFP_PTE_MAPPED flag to allocate pte-mapped pages

On 23.08.21 15:25, Mike Rapoport wrote:
From: Mike Rapoport <rppt@xxxxxxxxxxxxx>

When the __GFP_PTE_MAPPED flag is passed to an allocation request of order
0, the allocated page will be mapped at PTE level in the direct map.

To reduce direct map fragmentation, maintain a cache of 4K pages that
are already mapped at PTE level in the direct map. Whenever the cache
should be replenished, try to allocate a 2M page and split it into 4K pages
to localize shattering of the direct map. If the allocation of a 2M page
fails, fall back to a single page allocation at the expense of direct map
fragmentation.

The cache registers a shrinker that releases free pages from the cache to
the page allocator.

The __GFP_PTE_MAPPED and caching of 4K pages are enabled only if an
architecture selects ARCH_WANTS_PTE_MAPPED_CACHE in its Kconfig.

[
cache management is mostly copied from
https://lore.kernel.org/lkml/20210505003032.489164-4-rick.p.edgecombe@xxxxxxxxx/
]

Signed-off-by: Mike Rapoport <rppt@xxxxxxxxxxxxx>
---
  arch/Kconfig                    |   8 +
  arch/x86/Kconfig                |   1 +
  include/linux/gfp.h             |  11 +-
  include/linux/mm.h              |   2 +
  include/linux/pageblock-flags.h |  26 ++++
  init/main.c                     |   1 +
  mm/internal.h                   |   3 +-
  mm/page_alloc.c                 | 261 +++++++++++++++++++++++++++++++-
  8 files changed, 309 insertions(+), 4 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 129df498a8e1..2db95331201b 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -243,6 +243,14 @@ config ARCH_HAS_SET_MEMORY
  config ARCH_HAS_SET_DIRECT_MAP
  	bool

[...]

+static int __pte_mapped_cache_init(struct pte_mapped_cache *cache)
+{
+	int err;
+
+	err = list_lru_init(&cache->lru);
+	if (err)
+		return err;
+
+	cache->shrinker.count_objects = pte_mapped_cache_shrink_count;
+	cache->shrinker.scan_objects = pte_mapped_cache_shrink_scan;
+	cache->shrinker.seeks = DEFAULT_SEEKS;
+	cache->shrinker.flags = SHRINKER_NUMA_AWARE;
+
+	err = register_shrinker(&cache->shrinker);
+	if (err)
+		goto err_list_lru_destroy;

With a shrinker in place, it really does feel like this should be a cache outside of the buddy. Or at least moved out of page_alloc.c, with a clean interface for working with the buddy.
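To make the suggestion concrete, such a clean interface might look roughly like the sketch below. These declarations are purely hypothetical (they do not exist in the kernel or in the patch); they only illustrate the cache living in its own file and talking to the buddy through a narrow surface:

```c
/* Hypothetical interface sketch, not part of the patch under review. */
struct page;
struct pte_mapped_cache; /* opaque to page_alloc.c */

struct pte_mapped_cache *pte_mapped_cache_create(void);
void pte_mapped_cache_destroy(struct pte_mapped_cache *cache);

/* Hand out a PTE-mapped 4K page, replenishing from the buddy as needed. */
struct page *pte_mapped_cache_alloc(struct pte_mapped_cache *cache);

/* Return a page to the cache; the shrinker releases excess to the buddy. */
void pte_mapped_cache_free(struct pte_mapped_cache *cache, struct page *page);
```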

But I have only had a quick glance at this patch.

--
Thanks,

David / dhildenb




