* Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:

> [...]
>
> Ok, so I don't know all the issues, and in many ways I don't even
> really care. You could do it other ways, I don't think this is a big
> deal. The part I hate is the runtime hook into the core MM page
> allocation code, so I'm just throwing out any random thing that comes
> to my mind that could be used to avoid that part.

So, my hope was that it's possible to have a single, simple, zero-cost
runtime check [zero cost for already initialized pages], because it can
be merged into the already existing page flag mask check present here,
which is executed for every freshly allocated page:

  static inline int check_new_page(struct page *page)
  {
          if (unlikely(page_mapcount(page) |
                  (page->mapping != NULL)  |
                  (atomic_read(&page->_count) != 0)  |
                  (page->flags & PAGE_FLAGS_CHECK_AT_PREP) |
                  (mem_cgroup_bad_page_check(page)))) {
                  bad_page(page);
                  return 1;
          }
          return 0;
  }

We already run this for every newly allocated page, so the
initialization check could hide in PAGE_FLAGS_CHECK_AT_PREP in a
zero-cost fashion - see the first sketch below.

I'd not do any of the ensure_page_is_initialized() or
__expand_page_initialization() complications of this patch-set: each
page head represents itself and gets initialized when check_new_page()
is run on it.

During regular bootup we'd initialize like before, except we don't set
up the page heads but memset() them to zero. With each page head 32
bytes this would mean 8 GB of page head memory to clear per 1 TB - with
16 TB that's 128 GB to clear - and that ought to be possible to do
rather quickly, perhaps with some smart SMP cross-call approach that
makes sure each memset is done in a node-local fashion (second sketch
below). [*]

Such an approach should IMO be far smaller and less invasive than the
patches presented so far: it should be below 100 lines or so.

I don't know why there's such a big difference between the theory I
outlined and the invasive patch-set that was implemented in practice -
perhaps I'm missing some complication. I was trying to probe that
difference before giving up on the idea and punting back to the async
hotplug-ish approach, which would obviously work well too.

All in all, I think async init just hides the real problem: there's no
way memory init should take this long.

Thanks,

	Ingo

[*] Alternatively, maybe the main performance problem is that
    node-local memory is set up on a remote (boot) node? In that case
    I'd try to optimize it by using set_cpus_allowed() to live migrate
    the memory init code from node to node, tracking the node whose
    struct page array is being initialized - see the third sketch
    below.
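
To make the first point concrete, here is a minimal, untested sketch of
how the check could hide in the existing mask test. It assumes the
boot-time pass sets a new PG_uninitialized flag in the otherwise zeroed
page heads, and that this flag is part of PAGE_FLAGS_CHECK_AT_PREP -
PageUninitialized() and init_single_page() are made-up names, a real
patch would have to find a free page flag and reuse the existing struct
page init code:

  static inline int check_new_page(struct page *page)
  {
          if (unlikely(page_mapcount(page) |
                  (page->mapping != NULL)  |
                  (atomic_read(&page->_count) != 0)  |
                  (page->flags & PAGE_FLAGS_CHECK_AT_PREP) |
                  (mem_cgroup_bad_page_check(page)))) {

                  /*
                   * Cold path, taken at most once per page: a page
                   * head that was only cleared at boot gets set up
                   * for real on first allocation:
                   */
                  if (PageUninitialized(page)) {
                          init_single_page(page);
                          return 0;
                  }
                  bad_page(page);
                  return 1;
          }
          return 0;
  }

The fast path is unchanged: for an already initialized page the mask
test fails exactly as it does today and we never enter the unlikely
branch - that's the zero-cost part.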
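
The boot-time clearing could look like this rough sketch - one kernel
thread per node, each bound to a CPU of its node so that the memset
stays node-local. It is untested, the wait for the threads to finish is
left out, clear_node_memmap() is a made-up helper, and using
node_mem_map/node_spanned_pages only covers the non-sparsemem layout; a
real patch would iterate the memmap sections:

  static int clear_node_memmap(void *arg)
  {
          pg_data_t *pgdat = arg;

          /* We are bound to pgdat's node, so this memset is node-local: */
          memset(pgdat->node_mem_map, 0,
                 pgdat->node_spanned_pages * sizeof(struct page));
          return 0;
  }

  static void __init clear_all_memmaps(void)
  {
          int nid;

          for_each_online_node(nid) {
                  struct task_struct *p;

                  p = kthread_create_on_node(clear_node_memmap,
                                             NODE_DATA(nid), nid,
                                             "memmap/%d", nid);
                  kthread_bind(p, cpumask_first(cpumask_of_node(nid)));
                  wake_up_process(p);
          }

          /* ... wait for all the threads before the memmap is used ... */
  }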
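
And the footnote variant, equally untested: keep the existing init code
but live migrate the boot task from node to node around it, via
set_cpus_allowed_ptr(). init_node_memmap() stands in for the current
per-node init loop, and this obviously only works at a point in the
bootup where the other nodes' CPUs are already online:

  static void __init init_all_memmaps(void)
  {
          int nid;

          for_each_online_node(nid) {
                  /* Run on the node whose page heads we set up next: */
                  set_cpus_allowed_ptr(current, cpumask_of_node(nid));

                  init_node_memmap(NODE_DATA(nid));
          }

          /* Let the boot task run anywhere again: */
          set_cpus_allowed_ptr(current, cpu_online_mask);
  }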