On Thu, Apr 18, 2024 at 12:46:39AM +0100, Matthew Wilcox wrote:
> On Wed, Apr 17, 2024 at 05:18:34PM -0400, Peter Xu wrote:
> > Current mm-unstable will hit this when running test_hugetlb_memcg.  This
> > fixes the crash for me. [1]
> >
> > Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
> > ---
> >  mm/memcontrol.c | 1 +
> >  1 file changed, 1 insertion(+)
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 1840ba4c355d..7703ced535a3 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -7529,6 +7529,7 @@ static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug)
> >
> >  	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
> >  	VM_BUG_ON_FOLIO(folio_order(folio) > 1 &&
> > +			!folio_test_hugetlb(folio) &&
> >  			!list_empty(&folio->_deferred_list), folio);
>
> Hum.  I thought we didn't get here for hugetlb folios.  What
> stacktrace did you get?

A normal hugetlb free path, IIRC.  You can try the test case; I mentioned
the reproducer [1] above.  It crashes consistently.

> I'm basing it on comments like this:
>
> 		/* hugetlb has its own memcg */
> 		if (folio_test_hugetlb(folio)) {
> 			if (lruvec) {
> 				unlock_page_lruvec_irqrestore(lruvec, flags);
> 				lruvec = NULL;
> 			}
> 			free_huge_folio(folio);
> 			continue;
> 		}

Hugetlb does have its own memcg, but I guess now it's even more than that;
see this patch merged months ago:

https://lore.kernel.org/all/20231006184629.155543-4-nphamcs@xxxxxxxxx/

Especially:

@@ -1902,6 +1902,7 @@ void free_huge_folio(struct folio *folio)
 				     pages_per_huge_page(h), folio);
 	hugetlb_cgroup_uncharge_folio_rsvd(hstate_index(h),
 					  pages_per_huge_page(h), folio);
+	mem_cgroup_uncharge(folio);
 	if (restore_reserve)
 		h->resv_huge_pages++;

The comment above looks a bit confusing to me, as memcg is not the only
thing special about hugetlb pages.  Explicitly mentioning memcg there made
it sound as if hugetlb could be freed like a normal compound page once
memcg were not a problem, but it looks like that is not the case yet.

--
Peter Xu