In a nutshell: 4K is too small and 2M is too big. We started asking ourselves whether there was something in the middle that we could do. This series shows what that middle ground might look like. It provides some of the benefits of THP while eliminating some of the downsides.

This series uses "multiple consecutive pages" (mcpages), between 8K and 2M of base pages, for anonymous user space mappings. This leads to less internal fragmentation than 2M mappings, and thus less memory consumption and less CPU time wasted zeroing memory which will never be used.

In the implementation, we allocate a high order page with the order of the mcpage (e.g., order 2 for a 16K mcpage). This makes sure physically contiguous memory is used, which benefits sequential memory access latency. The high order page is then split, so each sub-page of an mcpage is just a normal 4K page. The current kernel page management is applied to mcpages without any changes. mcpage also allows page faults to be batched, reducing the number of page faults.

There are costs to mcpage. Besides lacking the TLB benefit that THP brings, it increases memory consumption and page allocation latency compared to 4K base pages.

This series is the first step for mcpage. Future work can enable mcpage for more components, like the page cache, swapping, etc. Finally, most pages in the system will be allocated/freed/reclaimed at mcpage order.

The series is constructed as follows:

  Patch 1 adds the mcpage size related definitions and a Kconfig entry.
  Patch 2 is specific to x86_64: it aligns the mmap start address to the mcpage size.
  Patch 3 is the main change. It hooks into the anonymous page fault handler and applies mcpage to anonymous mappings.
  Patch 4 adds some statistics for mcpage.

The overall code change is quite straightforward. What I would most like to hear here is whether this is the right direction to pursue further.

This series does not leverage compound pages.
This means that normal kernel code that encounters an mcpage region does not need to do anything special. It also does not leverage folios, although leveraging folios is something we would like to explore; we would welcome input on how that might happen.

Some performance data were collected with a 16K mcpage size and are shown in patches 2/4 and 4/4. If you have another workload and would like to know the impact, just let me know. I can set up the environment and run the test.

Yin Fengwei (4):
  mcpage: add size/mask/shift definition for multiple consecutive page
  mcpage: anon page: Use mcpage for anonymous mapping
  mcpage: add vmstat counters for mcpages
  mcpage: get_unmapped_area return mcpage size aligned addr

 arch/x86/kernel/sys_x86_64.c  |   8 ++
 include/linux/gfp.h           |   5 ++
 include/linux/mcpage_mm.h     |  35 +++++++++
 include/linux/mm_types.h      |  11 +++
 include/linux/vm_event_item.h |  10 +++
 mm/Kconfig                    |  19 +++++
 mm/Makefile                   |   1 +
 mm/mcpage_memory.c            | 140 ++++++++++++++++++++++++++++++++++
 mm/memory.c                   |  12 +++
 mm/mempolicy.c                |  51 +++++++++++++
 mm/vmstat.c                   |   7 ++
 11 files changed, 299 insertions(+)
 create mode 100644 include/linux/mcpage_mm.h
 create mode 100644 mm/mcpage_memory.c

base-commit: b7bfaa761d760e72a969d116517eaa12e404c262
-- 
2.30.2