Patch "mm/hugetlb_cgroup: convert __set_hugetlb_cgroup() to folios" has been added to the 6.1-stable tree

This is a note to let you know that I've just added the patch titled

    mm/hugetlb_cgroup: convert __set_hugetlb_cgroup() to folios

to the 6.1-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     mm-hugetlb_cgroup-convert-__set_hugetlb_cgroup-to-fo.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit b1d4ed0d3b4dd8f80f83089504b348f4d996caba
Author: Sidhartha Kumar <sidhartha.kumar@xxxxxxxxxx>
Date:   Tue Nov 1 15:30:51 2022 -0700

    mm/hugetlb_cgroup: convert __set_hugetlb_cgroup() to folios
    
    [ Upstream commit a098c977722ca27d3b4bfeb966767af3cce45f85 ]
    
    Patch series "convert hugetlb_cgroup helper functions to folios", v2.
    
    This patch series continues the conversion of hugetlb code from being
    managed in pages to folios by converting many of the hugetlb_cgroup helper
    functions to use folios.  This allows the core hugetlb functions to pass
    in a folio to these helper functions.
    
    This patch (of 9):
    
    Change __set_hugetlb_cgroup() to use folios so it is explicit that the
    function operates on a head page.
    
    Link: https://lkml.kernel.org/r/20221101223059.460937-1-sidhartha.kumar@xxxxxxxxxx
    Link: https://lkml.kernel.org/r/20221101223059.460937-2-sidhartha.kumar@xxxxxxxxxx
    Signed-off-by: Sidhartha Kumar <sidhartha.kumar@xxxxxxxxxx>
    Reviewed-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
    Reviewed-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
    Cc: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxx>
    Cc: Bui Quang Minh <minhquangbui99@xxxxxxxxx>
    Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
    Cc: Miaohe Lin <linmiaohe@xxxxxxxxxx>
    Cc: Mina Almasry <almasrymina@xxxxxxxxxx>
    Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
    Stable-dep-of: b76b46902c2d ("mm/hugetlb: fix missing hugetlb_lock for resv uncharge")
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 630cd255d0cfd..7576e9ed8afe7 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -90,31 +90,31 @@ hugetlb_cgroup_from_page_rsvd(struct page *page)
 	return __hugetlb_cgroup_from_page(page, true);
 }
 
-static inline void __set_hugetlb_cgroup(struct page *page,
+static inline void __set_hugetlb_cgroup(struct folio *folio,
 				       struct hugetlb_cgroup *h_cg, bool rsvd)
 {
-	VM_BUG_ON_PAGE(!PageHuge(page), page);
+	VM_BUG_ON_FOLIO(!folio_test_hugetlb(folio), folio);
 
-	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
+	if (folio_order(folio) < HUGETLB_CGROUP_MIN_ORDER)
 		return;
 	if (rsvd)
-		set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD,
+		set_page_private(folio_page(folio, SUBPAGE_INDEX_CGROUP_RSVD),
 				 (unsigned long)h_cg);
 	else
-		set_page_private(page + SUBPAGE_INDEX_CGROUP,
+		set_page_private(folio_page(folio, SUBPAGE_INDEX_CGROUP),
 				 (unsigned long)h_cg);
 }
 
 static inline void set_hugetlb_cgroup(struct page *page,
 				     struct hugetlb_cgroup *h_cg)
 {
-	__set_hugetlb_cgroup(page, h_cg, false);
+	__set_hugetlb_cgroup(page_folio(page), h_cg, false);
 }
 
 static inline void set_hugetlb_cgroup_rsvd(struct page *page,
 					  struct hugetlb_cgroup *h_cg)
 {
-	__set_hugetlb_cgroup(page, h_cg, true);
+	__set_hugetlb_cgroup(page_folio(page), h_cg, true);
 }
 
 static inline bool hugetlb_cgroup_disabled(void)
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index f61d132df52b3..b2316bcbf634a 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -314,7 +314,7 @@ static void __hugetlb_cgroup_commit_charge(int idx, unsigned long nr_pages,
 	if (hugetlb_cgroup_disabled() || !h_cg)
 		return;
 
-	__set_hugetlb_cgroup(page, h_cg, rsvd);
+	__set_hugetlb_cgroup(page_folio(page), h_cg, rsvd);
 	if (!rsvd) {
 		unsigned long usage =
 			h_cg->nodeinfo[page_to_nid(page)]->usage[idx];
@@ -356,7 +356,7 @@ static void __hugetlb_cgroup_uncharge_page(int idx, unsigned long nr_pages,
 	h_cg = __hugetlb_cgroup_from_page(page, rsvd);
 	if (unlikely(!h_cg))
 		return;
-	__set_hugetlb_cgroup(page, NULL, rsvd);
+	__set_hugetlb_cgroup(page_folio(page), NULL, rsvd);
 
 	page_counter_uncharge(__hugetlb_cgroup_counter_from_cgroup(h_cg, idx,
 								   rsvd),
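
For readers following the conversion, here is a small standalone C sketch (not kernel code; the types are simplified stand-ins and the index value is made up) of the accessor change this patch makes: raw pointer arithmetic on a head page, "page + SUBPAGE_INDEX_CGROUP", becomes an explicit folio accessor, "folio_page(folio, SUBPAGE_INDEX_CGROUP)", which indexes from the folio's head page. In the kernel a folio overlays the compound page's head struct page; the sketch models it as a plain wrapper so it stays valid standalone C.

/*
 * Standalone sketch only: simplified stand-ins for struct page / struct folio.
 * In the kernel, a folio overlays the compound page's head struct page;
 * here it is modelled as a wrapper holding a pointer to the head page.
 */
#include <assert.h>
#include <stdio.h>

struct page { unsigned long private; };      /* stand-in for struct page */
struct folio { struct page *head; };         /* stand-in for struct folio */

#define SUBPAGE_INDEX_CGROUP 2               /* illustrative value only */

/* Simplified analogue of the kernel's folio_page(): index from the head page. */
static struct page *folio_page(struct folio *folio, unsigned long n)
{
	return folio->head + n;
}

int main(void)
{
	struct page compound[8] = { { 0 } };     /* pretend compound (huge) page */
	struct folio folio = { .head = &compound[0] };

	/* Old style: raw pointer arithmetic on the head page. */
	struct page *old_way = &compound[0] + SUBPAGE_INDEX_CGROUP;
	/* New style: explicit folio accessor, reaching the same subpage. */
	struct page *new_way = folio_page(&folio, SUBPAGE_INDEX_CGROUP);

	assert(old_way == new_way);
	printf("both forms reach subpage %d\n", SUBPAGE_INDEX_CGROUP);
	return 0;
}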



