From: Wei Yang <richard.weiyang@xxxxxxxxxxxxxxxxx>
Subject: mm/hugetlb: narrow the hugetlb_lock protection area during preparing huge page

set_hugetlb_cgroup_[rsvd]() just manipulates page-local data, which does
not need to be protected by hugetlb_lock.  Let's move it out of the locked
section.

Link: https://lkml.kernel.org/r/20200831022351.20916-7-richard.weiyang@xxxxxxxxxxxxxxxxx
Signed-off-by: Wei Yang <richard.weiyang@xxxxxxxxxxxxxxxxx>
Reviewed-by: Baoquan He <bhe@xxxxxxxxxx>
Reviewed-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/hugetlb.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/mm/hugetlb.c~mm-hugetlb-narrow-the-hugetlb_lock-protection-area-during-preparing-huge-page
+++ a/mm/hugetlb.c
@@ -1504,9 +1504,9 @@ static void prep_new_huge_page(struct hs
 {
 	INIT_LIST_HEAD(&page->lru);
 	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
-	spin_lock(&hugetlb_lock);
 	set_hugetlb_cgroup(page, NULL);
 	set_hugetlb_cgroup_rsvd(page, NULL);
+	spin_lock(&hugetlb_lock);
 	h->nr_huge_pages++;
 	h->nr_huge_pages_node[nid]++;
 	spin_unlock(&hugetlb_lock);
_
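
For readers less familiar with the pattern, here is a minimal user-space
sketch of the same idea (plain C with pthreads; the names pool, item and
prep_item below are illustrative assumptions, not the hugetlb code):
initialization that only touches an object nobody else can see yet can run
before the lock is taken, and only the updates to shared state stay inside
the critical section.

/*
 * Illustrative sketch only: a user-space analogue of narrowing a lock's
 * critical section.  It mirrors the structure of the hunk above, where
 * per-page setup moves before spin_lock(&hugetlb_lock) and only the
 * shared counters remain protected.
 */
#include <pthread.h>
#include <stddef.h>

struct item {
	struct item *next;
	void *cgroup;		/* per-item data, like page-local state */
	void *cgroup_rsvd;
};

struct pool {
	pthread_mutex_t lock;	/* analogous to hugetlb_lock */
	struct item *head;
	unsigned long nr_items;	/* shared counter, needs the lock */
};

static void prep_item(struct pool *p, struct item *it)
{
	/* Item-local setup: 'it' is not yet visible to others, no lock needed. */
	it->cgroup = NULL;
	it->cgroup_rsvd = NULL;

	/* Only the shared pool state is touched under the lock. */
	pthread_mutex_lock(&p->lock);
	it->next = p->head;
	p->head = it;
	p->nr_items++;
	pthread_mutex_unlock(&p->lock);
}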