On Mon, Jul 9, 2018 at 5:56 AM, Jan Kara <jack@xxxxxxx> wrote:
> On Wed 04-07-18 23:49:02, Dan Williams wrote:
>> In order to keep pfn_to_page() a simple offset calculation the 'struct
>> page' memmap needs to be mapped and initialized in advance of any usage
>> of a page. This poses a problem for large memory systems as it delays
>> full availability of memory resources for 10s to 100s of seconds.
>>
>> For typical 'System RAM' the problem is mitigated by the fact that large
>> memory allocations tend to happen after the kernel has fully initialized
>> and userspace services / applications are launched. A small amount, 2GB
>> of memory, is initialized up front. The remainder is initialized in the
>> background and freed to the page allocator over time.
>>
>> Unfortunately, that scheme is not directly reusable for persistent
>> memory and dax because userspace has visibility to the entire resource
>> pool and can choose to access any offset directly at its choosing. In
>> other words there is no allocator indirection where the kernel can
>> satisfy requests with arbitrary pages as they become initialized.
>>
>> That said, we can approximate the optimization by performing the
>> initialization in the background, allow the kernel to fully boot the
>> platform, start up pmem block devices, mount filesystems in dax mode,
>> and only incur the delay at the first userspace dax fault.
>>
>> With this change an 8 socket system was observed to initialize pmem
>> namespaces in ~4 seconds whereas it was previously taking ~4 minutes.
>>
>> These patches apply on top of the HMM + devm_memremap_pages() reworks
>> [1]. Andrew, once the reviews come back, please consider this series for
>> -mm as well.
>>
>> [1]: https://lkml.org/lkml/2018/6/19/108
>
> One question: Why not (in addition to background initialization) have
> ->direct_access() initialize a block of struct pages around the pfn it
> needs if it finds it's not initialized yet?
> That would make devices usable immediately without waiting for init to
> complete...

Hmm, yes, relatively immediately... it would depend on the granularity of the tracking, i.e. where we can reliably steal initialization work from the background thread. I'll give it a shot. I'm thinking of dividing each thread's work into 64 sub-units and tracking those units with a bitmap. The worst-case init time then becomes the time to initialize the pages for a range of namespace-size / (NR_MEMMAP_THREADS * 64).