On Tue, Nov 10, 2020 at 2:11 AM Oscar Salvador <osalvador@xxxxxxx> wrote:
>
> On Sun, Nov 08, 2020 at 10:11:00PM +0800, Muchun Song wrote:
> > In register_page_bootmem_memmap, the slab allocator is not ready
> > yet. So when ALLOC_SPLIT_PTLOCKS, we use init_mm.page_table_lock;
> > otherwise we use the per-page-table lock (page->ptl). In a later
> > patch, we will use the vmemmap page table lock to guard the
> > splitting of the vmemmap huge PMD.
>
> I am not sure about this one.
> Grabbing init_mm's pagetable lock for specific hugetlb operations does not
> seem like a good idea, and we do not know how contended that one is.

These APIs are used to guard operations on the vmemmap page tables. For
now, only specific hugetlb operations use them, but in the future someone
else who wants to modify the vmemmap page tables could use these APIs as
well.

You are right that we do not know how contended init_mm's pagetable lock
is, so grabbing it may not be a good idea.

> I think a better fit would be to find another hook to initialize
> page_table_lock at a later stage.
> Anyway, we do not need it until we are going to perform an operation
> on the range, right?

Yeah. You are right.

> Unless I am missing something, this should be doable in hugetlb_init.
>
> hugetlb_init is part of an initcall that gets called during do_initcalls.
> At that time, slab is fully operative.

If we initialize the page_table_lock in hugetlb_init, we need to walk the
vmemmap page tables again. But the vmemmap is small, so the overhead of
that should also be small. And doing this in hugetlb_init makes the code
cleaner. Thanks very much.

> start_kernel
>   kmem_cache_init_late
>   ...
>   arch_call_rest_init
>     rest_init
>       kernel_init_freeable
>         do_basic_setup
>           do_initcalls
>             hugetlb_init
>
> --
> Oscar Salvador
> SUSE L3

--
Yours,
Muchun