On Wed 04-07-18 23:49:02, Dan Williams wrote:
> In order to keep pfn_to_page() a simple offset calculation the 'struct
> page' memmap needs to be mapped and initialized in advance of any usage
> of a page. This poses a problem for large memory systems as it delays
> full availability of memory resources for 10s to 100s of seconds.
>
> For typical 'System RAM' the problem is mitigated by the fact that large
> memory allocations tend to happen after the kernel has fully initialized
> and userspace services / applications are launched. A small amount, 2GB
> of memory, is initialized up front. The remainder is initialized in the
> background and freed to the page allocator over time.
>
> Unfortunately, that scheme is not directly reusable for persistent
> memory and dax because userspace has visibility to the entire resource
> pool and can choose to access any offset directly at its choosing. In
> other words there is no allocator indirection where the kernel can
> satisfy requests with arbitrary pages as they become initialized.
>
> That said, we can approximate the optimization by performing the
> initialization in the background, allow the kernel to fully boot the
> platform, start up pmem block devices, mount filesystems in dax mode,
> and only incur the delay at the first userspace dax fault.
>
> With this change an 8 socket system was observed to initialize pmem
> namespaces in ~4 seconds whereas it was previously taking ~4 minutes.
>
> These patches apply on top of the HMM + devm_memremap_pages() reworks
> [1]. Andrew, once the reviews come back, please consider this series for
> -mm as well.
>
> [1]: https://lkml.org/lkml/2018/6/19/108

One question: Why not (in addition to background initialization) have
->direct_access() initialize a block of struct pages around the pfn it
needs if it finds it's not initialized yet? That would make devices
usable immediately without waiting for init to complete...
								Honza

> ---
>
> Dan Williams (9):
>       mm: Plumb dev_pagemap instead of vmem_altmap to memmap_init_zone()
>       mm: Enable asynchronous __add_pages() and vmemmap_populate_hugepages()
>       mm: Teach memmap_init_zone() to initialize ZONE_DEVICE pages
>       mm: Multithread ZONE_DEVICE initialization
>       mm: Allow an external agent to wait for memmap initialization
>       filesystem-dax: Make mount time pfn validation a debug check
>       libnvdimm, pmem: Initialize the memmap in the background
>       device-dax: Initialize the memmap in the background
>       libnvdimm, namespace: Publish page structure init state / control
>
> Huaisheng Ye (4):
>       nvdimm/pmem: check the validity of the pointer pfn
>       nvdimm/pmem-dax: check the validity of the pointer pfn
>       s390/block/dcssblk: check the validity of the pointer pfn
>       fs/dax: Assign NULL to pfn of dax_direct_access if useless
>
>
>  arch/ia64/mm/init.c             |    5 +
>  arch/powerpc/mm/mem.c           |    5 +
>  arch/s390/mm/init.c             |    8 +
>  arch/sh/mm/init.c               |    5 +
>  arch/x86/mm/init_32.c           |    8 +
>  arch/x86/mm/init_64.c           |   27 +++-
>  drivers/dax/Kconfig             |   10 ++
>  drivers/dax/dax-private.h       |    2
>  drivers/dax/device-dax.h        |    2
>  drivers/dax/device.c            |   16 +++
>  drivers/dax/pmem.c              |    5 +
>  drivers/dax/super.c             |   64 +++++++-----
>  drivers/nvdimm/nd.h             |    2
>  drivers/nvdimm/pfn_devs.c       |   54 ++++++++-
>  drivers/nvdimm/pmem.c           |   17 ++-
>  drivers/nvdimm/pmem.h           |    1
>  drivers/s390/block/dcssblk.c    |    5 +
>  fs/dax.c                        |   10 +-
>  include/linux/memmap_async.h    |   55 ++++++++++
>  include/linux/memory_hotplug.h  |   18 ++-
>  include/linux/memremap.h        |   31 ++++++
>  include/linux/mm.h              |    8 +
>  kernel/memremap.c               |   85 ++++++-------
>  mm/memory_hotplug.c             |   73 ++++++++---
>  mm/page_alloc.c                 |  215 +++++++++++++++++++++++++++++++------
>  mm/sparse-vmemmap.c             |   56 ++++++--
>  tools/testing/nvdimm/pmem-dax.c |   11 ++
>  27 files changed, 610 insertions(+), 188 deletions(-)
>  create mode 100644 include/linux/memmap_async.h

-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR