On Wed, Feb 22, 2023 at 12:06:20PM -0500, Johannes Weiner wrote:
> Hello,
>
> On Tue, Feb 21, 2023 at 04:43:44PM -0500, Peter Xu wrote:
> > If memory charge failed, the caller shouldn't call mem_cgroup_uncharge().
> > Let alloc_charge_hpage() handle the error itself and clear hpage properly
> > if mem charge fails.
>
> I'm a bit confused by this patch.
>
> There isn't anything wrong with calling mem_cgroup_uncharge() on an
> uncharged page, functionally. It checks and bails out.

Indeed, I didn't really notice that calling it has zero side effects,
sorry.  In that case both the "Fixes" and "Cc: stable" tags do not apply.

> It's an unnecessary call of course, but since it's an error path it's
> also not a cost issue, either.
>
> I could see an argument for improving the code, but this is actually
> more code, and the caller still has the uncharge-and-put branch anyway
> for when the collapse fails later on.
>
> So I'm not sure I understand the benefit of this change.

Yes, the benefit is having a clear interface for alloc_charge_hpage()
that is not prone to leaking the huge page.

The patch comes from a review of David's other patch here:

https://lore.kernel.org/all/Y%2FU9fBxVJdhxiZ1v@x1n/

I've attached a new version that just rewords the commit message and
removes the improper tags.  Do you think that's acceptable?

Thanks,

-- 
Peter Xu
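[ For context: the "uncharge-and-put branch" referred to above is the
  caller's common exit path.  A simplified sketch of what the 6.2-era
  collapse_huge_page() exit labels look like, from memory, so the exact
  upstream lines may differ:

out_up_write:
	mmap_write_unlock(mm);
out_nolock:
	/* common cleanup: uncharge and free the hpage if it is still around */
	if (hpage) {
		mem_cgroup_uncharge(page_folio(hpage));
		put_page(hpage);
	}
	return result;

  With the patch below, a failed charge leaves *hpage set to NULL, so this
  branch no longer ends up calling mem_cgroup_uncharge() on an uncharged
  page. ]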
>From 0595acbd688b60ff7b2821a073c0fe857a4ae0ee Mon Sep 17 00:00:00 2001
From: Peter Xu <peterx@xxxxxxxxxx>
Date: Tue, 21 Feb 2023 16:43:44 -0500
Subject: [PATCH] mm/khugepaged: alloc_charge_hpage() take care of mem charge
 errors

If the memory charge fails, instead of returning the hpage along with an
error, let the function clean up the folio properly, which is what a
function should normally do in this case - either return successfully, or
return an error with no side effects left over from the partial run.

This also avoids the caller calling mem_cgroup_uncharge() unnecessarily
on either the anon or the shmem path (even though it is safe to do so).

Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Yang Shi <shy828301@xxxxxxxxx>
Cc: David Stevens <stevensd@xxxxxxxxxxxx>
Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
---
 mm/khugepaged.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 8dbc39896811..941d1c7ea910 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1063,12 +1063,19 @@ static int alloc_charge_hpage(struct page **hpage, struct mm_struct *mm,
 	gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
 		     GFP_TRANSHUGE);
 	int node = hpage_collapse_find_target_node(cc);
+	struct folio *folio;
 
 	if (!hpage_collapse_alloc_page(hpage, gfp, node, &cc->alloc_nmask))
 		return SCAN_ALLOC_HUGE_PAGE_FAIL;
-	if (unlikely(mem_cgroup_charge(page_folio(*hpage), mm, gfp)))
+
+	folio = page_folio(*hpage);
+	if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
+		folio_put(folio);
+		*hpage = NULL;
 		return SCAN_CGROUP_CHARGE_FAIL;
+	}
 	count_memcg_page_event(*hpage, THP_COLLAPSE_ALLOC);
+
 	return SCAN_SUCCEED;
 }
 
-- 
2.39.1
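[ Usage note: with this applied, alloc_charge_hpage() either returns
  SCAN_SUCCEED with *hpage allocated and charged, or returns an error with
  *hpage reset to NULL and nothing charged.  A hypothetical caller-side
  sketch relying on that contract (illustration only, not part of the
  patch):

	struct page *hpage;
	int result;

	result = alloc_charge_hpage(&hpage, mm, cc);
	if (result != SCAN_SUCCEED)
		return result;	/* hpage is NULL: nothing to uncharge or put */

	/* ... proceed with the collapse; failures after this point still go
	 * through the caller's existing uncharge-and-put branch ... */
 ]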