On Sun, Nov 08, 2020 at 10:11:00PM +0800, Muchun Song wrote:
> In register_page_bootmem_memmap, the slab allocator is not ready yet.
> So when ALLOC_SPLIT_PTLOCKS, we use init_mm.page_table_lock; otherwise
> we use the per-page-table lock (page->ptl). In a later patch, we will
> use the vmemmap page table lock to guard the splitting of the vmemmap
> huge PMD.

I am not sure about this one.
Grabbing init_mm's pagetable lock for specific hugetlb operations does not
seem like a good idea, and we do not know how contended that lock is.

I think a better fit would be to find another hook to initialize the
page_table_lock at a later stage.
Anyway, we do not need it until we are about to perform an operation on
the range, right?

Unless I am missing something, this should be doable in hugetlb_init.
hugetlb_init is part of an initcall that gets called during do_initcalls.
At that point, slab is fully operative:

start_kernel
 kmem_cache_init_late
 ...
 arch_call_rest_init
  rest_init
   kernel_init_freeable
    do_basic_setup
     do_initcalls
      hugetlb_init

-- 
Oscar Salvador
SUSE L3
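
To make the suggestion a bit more concrete, here is a rough, untested sketch
of what deferring the lock setup to hugetlb_init could look like. The helper
name hugetlb_vmemmap_init_ptlocks() is made up, and VMEMMAP_START/VMEMMAP_END
only stand in for whatever vmemmap range the architecture actually populates;
this is an illustration of the idea, not a patch.

/*
 * Sketch only: walk the vmemmap range once slab is up and give every
 * populated PMD table its split ptlock, so register_page_bootmem_memmap()
 * never has to fall back to init_mm.page_table_lock.
 */
static void __init hugetlb_vmemmap_init_ptlocks(void)
{
	unsigned long addr;

	/* VMEMMAP_START/VMEMMAP_END are placeholders for the real range. */
	for (addr = VMEMMAP_START; addr < VMEMMAP_END; addr += PUD_SIZE) {
		pgd_t *pgd = pgd_offset_k(addr);
		p4d_t *p4d;
		pud_t *pud;
		pmd_t *pmd;

		if (pgd_none(*pgd))
			continue;
		p4d = p4d_offset(pgd, addr);
		if (p4d_none(*p4d))
			continue;
		pud = pud_offset(p4d, addr);
		if (pud_none(*pud))
			continue;

		/*
		 * The PMD table page is the one whose lock would guard the
		 * split of a vmemmap huge PMD. With ALLOC_SPLIT_PTLOCKS,
		 * ptlock_init() kmallocs the spinlock, which is fine here
		 * because do_initcalls() runs well after slab is ready.
		 */
		pmd = pmd_offset(pud, addr);
		if (!ptlock_init(virt_to_page(pmd)))
			pr_warn("vmemmap: failed to allocate split ptlock\n");
	}
}

hugetlb_init() could then call this helper early on; given the call chain
above, kmem_cache_init_late() has long finished by the time do_initcalls()
runs, so the allocation done by ptlock_init() is safe at that point.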