From: Steve Sistare <steven.sistare@xxxxxxxxxx>

commit dc677b5f3765cfd0944c8873d1ea57f1a3439676 upstream.

The folio_try_get in memfd_alloc_folio is not necessary.  Delete it, and
delete the matching folio_put in memfd_pin_folios.  This also avoids
leaking a ref if the memfd_alloc_folio call to hugetlb_add_to_page_cache
fails.  That error path is also broken in a second way -- when its
folio_put causes the ref to become 0, it will implicitly call
free_huge_folio, but then the path *explicitly* calls free_huge_folio.
Delete the latter.

This is a continuation of the fix
  "mm/hugetlb: fix memfd_pin_folios free_huge_pages leak"

[steven.sistare@xxxxxxxxxx: remove explicit call to free_huge_folio(), per Matthew]
  Link: https://lkml.kernel.org/r/Zti-7nPVMcGgpcbi@xxxxxxxxxxxxxxxxxxxx
  Link: https://lkml.kernel.org/r/1725481920-82506-1-git-send-email-steven.sistare@xxxxxxxxxx
Link: https://lkml.kernel.org/r/1725478868-61732-1-git-send-email-steven.sistare@xxxxxxxxxx
Fixes: 89c1905d9c14 ("mm/gup: introduce memfd_pin_folios() for pinning memfd folios")
Signed-off-by: Steve Sistare <steven.sistare@xxxxxxxxxx>
Suggested-by: Vivek Kasireddy <vivek.kasireddy@xxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Jason Gunthorpe <jgg@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Cc: Peter Xu <peterx@xxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 mm/gup.c   |    4 +---
 mm/memfd.c |    3 +--
 2 files changed, 2 insertions(+), 5 deletions(-)

--- a/mm/gup.c
+++ b/mm/gup.c
@@ -3618,7 +3618,7 @@ long memfd_pin_folios(struct file *memfd
 	pgoff_t start_idx, end_idx, next_idx;
 	struct folio *folio = NULL;
 	struct folio_batch fbatch;
-	struct hstate *h = NULL;
+	struct hstate *h;
 	long ret = -EINVAL;
 
 	if (start < 0 || start > end || !max_folios)
@@ -3662,8 +3662,6 @@ long memfd_pin_folios(struct file *memfd
 							    &fbatch);
 			if (folio) {
 				folio_put(folio);
-				if (h)
-					folio_put(folio);
 				folio = NULL;
 			}
 
--- a/mm/memfd.c
+++ b/mm/memfd.c
@@ -89,13 +89,12 @@ struct folio *memfd_alloc_folio(struct f
 					numa_node_id(),
 					NULL,
 					gfp_mask);
-		if (folio && folio_try_get(folio)) {
+		if (folio) {
 			err = hugetlb_add_to_page_cache(folio,
 							memfd->f_mapping,
 							idx);
 			if (err) {
 				folio_put(folio);
-				free_huge_folio(folio);
 				return ERR_PTR(err);
 			}
 			folio_unlock(folio);


Patches currently in stable-queue which might be from steven.sistare@xxxxxxxxxx are

queue-6.11/mm-gup-fix-memfd_pin_folios-alloc-race-panic.patch
queue-6.11/mm-hugetlb-fix-memfd_pin_folios-resv_huge_pages-leak.patch
queue-6.11/mm-hugetlb-simplify-refs-in-memfd_alloc_folio.patch
queue-6.11/mm-hugetlb-fix-memfd_pin_folios-free_huge_pages-leak.patch
queue-6.11/mm-filemap-fix-filemap_get_folios_contig-thp-panic.patch
queue-6.11/mm-gup-fix-memfd_pin_folios-hugetlb-page-allocation.patch
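
To make the refcount reasoning in the commit message concrete, here is a
minimal userspace sketch of the lifecycle being fixed.  It is not kernel
code: toy_folio, toy_alloc, toy_put and toy_free are hypothetical stand-ins
for the kernel's folio, alloc_hugetlb_folio_reserve(), folio_put() and
free_huge_folio().  The allocation returns with one reference held, so on
the error path a single put both drops the last reference and triggers the
free implicitly; the old extra get/put pair and the explicit free were
redundant.

/*
 * Toy userspace model of the refcount flow touched by this patch.
 * All names here are illustrative only, not kernel APIs.
 */
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

struct toy_folio {
	int refcount;
};

/* Models the allocator: the returned folio already holds one ref. */
static struct toy_folio *toy_alloc(void)
{
	struct toy_folio *f = malloc(sizeof(*f));

	if (f)
		f->refcount = 1;
	return f;
}

/* Models free_huge_folio(): runs when the last reference goes away. */
static void toy_free(struct toy_folio *f)
{
	printf("freeing folio %p\n", (void *)f);
	free(f);
}

/* Models folio_put(): drop a ref; the free happens implicitly at zero. */
static void toy_put(struct toy_folio *f)
{
	assert(f->refcount > 0);
	if (--f->refcount == 0)
		toy_free(f);
}

int main(void)
{
	struct toy_folio *f = toy_alloc();

	if (!f)
		return 1;

	/*
	 * Old error path: an extra try_get had raised the count to 2, so one
	 * put left a dangling ref and the explicit free released memory that
	 * was still referenced.  New error path: the allocation ref is the
	 * only ref, so this single put is sufficient.
	 */
	toy_put(f);	/* prints "freeing folio ..." exactly once */
	return 0;
}

Built with any C compiler, the sketch prints the "freeing folio" line
exactly once, mirroring the single implicit free on the fixed error path.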