Re: [PATCH v3 10/18] mm: Allow non-hugetlb large folios to be batch processed

On Sun, Mar 10, 2024 at 11:01:06AM +0000, Ryan Roberts wrote:
> > So after my patch, instead of calling (in order):
> > 
> > 	page_cache_release(folio);
> > 	folio_undo_large_rmappable(folio);
> > 	mem_cgroup_uncharge(folio);
> > 	free_unref_page()
> > 
> > it calls:
> > 
> > 	__page_cache_release(folio, &lruvec, &flags);
> > 	mem_cgroup_uncharge_folios()
> > 	folio_undo_large_rmappable(folio);
> 
> I was just looking at this again, and something pops out...
> 
> You have swapped the order of folio_undo_large_rmappable() and
> mem_cgroup_uncharge(). But folio_undo_large_rmappable() calls
> get_deferred_split_queue() which tries to get the split queue from
> folio_memcg(folio) first and falls back to pgdat otherwise. If you are now
> calling mem_cgroup_uncharge_folios() first, will that remove the folio from the
> cgroup? Then we are operating on the wrong list? (just a guess based on the name
> of the function...)

Oh my.  You've got it.  This explains everything.  Thank you!
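For context, the split-queue selection at issue looks roughly like this (a simplified sketch of get_deferred_split_queue() from mm/huge_memory.c, not a verbatim copy of the upstream code):

	/*
	 * Simplified sketch: the deferred split queue is taken from the
	 * folio's memcg when one is charged, otherwise from the node.
	 */
	static struct deferred_split *get_deferred_split_queue(struct folio *folio)
	{
		struct mem_cgroup *memcg = folio_memcg(folio);
		struct pglist_data *pgdat = NODE_DATA(folio_nid(folio));

		if (memcg)
			return &memcg->deferred_split_queue;
		return &pgdat->deferred_split_queue;
	}

So if mem_cgroup_uncharge_folios() runs first, folio_memcg() already returns NULL by the time folio_undo_large_rmappable() looks up the queue, and the folio is unlinked under the node's queue lock even though it may still be sitting on the memcg's deferred split list.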



