The patch titled
     Subject: hugetlb: Simplify hugetlb_wp() arguments
has been added to the -mm mm-unstable branch.  Its filename is
     hugetlb-convert-hugetlb_wp-to-use-struct-vm_fault-fix.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/hugetlb-convert-hugetlb_wp-to-use-struct-vm_fault-fix.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: "Vishal Moola (Oracle)" <vishal.moola@xxxxxxxxx>
Subject: hugetlb: Simplify hugetlb_wp() arguments
Date: Mon, 8 Apr 2024 10:21:44 -0700

Simplify the function arguments, per Oscar and Muchun.

Link: https://lkml.kernel.org/r/ZhQtoFNZBNwBCeXn@fedora
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@xxxxxxxxx>
Suggested-by: Muchun Song <muchun.song@xxxxxxxxx>
Suggested-by: Oscar Salvador <osalvador@xxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/hugetlb.c |    9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

--- a/mm/hugetlb.c~hugetlb-convert-hugetlb_wp-to-use-struct-vm_fault-fix
+++ a/mm/hugetlb.c
@@ -5915,10 +5915,11 @@ static void unmap_ref_private(struct mm_
  * cannot race with other handlers or page migration.
  * Keep the pte_same checks anyway to make transition from the mutex easier.
  */
-static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
-		       struct folio *pagecache_folio,
+static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
 		       struct vm_fault *vmf)
 {
+	struct vm_area_struct *vma = vmf->vma;
+	struct mm_struct *mm = vma->vm_mm;
 	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
 	pte_t pte = huge_ptep_get(vmf->pte);
 	struct hstate *h = hstate_vma(vma);
@@ -6364,7 +6365,7 @@ static vm_fault_t hugetlb_no_page(struct
 	hugetlb_count_add(pages_per_huge_page(h), mm);
 	if ((vmf->flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED)) {
 		/* Optimization, do the COW without a second fault */
-		ret = hugetlb_wp(mm, vma, folio, vmf);
+		ret = hugetlb_wp(folio, vmf);
 	}

 	spin_unlock(vmf->ptl);
@@ -6577,7 +6578,7 @@ vm_fault_t hugetlb_fault(struct mm_struc

 	if (flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE)) {
 		if (!huge_pte_write(vmf.orig_pte)) {
-			ret = hugetlb_wp(mm, vma, pagecache_folio, &vmf);
+			ret = hugetlb_wp(pagecache_folio, &vmf);
 			goto out_put_page;
 		} else if (likely(flags & FAULT_FLAG_WRITE)) {
 			vmf.orig_pte = huge_pte_mkdirty(vmf.orig_pte);
_

Patches currently in -mm which might be from vishal.moola@xxxxxxxxx are

hugetlb-convert-hugetlb_fault-to-use-struct-vm_fault.patch
hugetlb-convert-hugetlb_no_page-to-use-struct-vm_fault.patch
hugetlb-convert-hugetlb_no_page-to-use-struct-vm_fault-fix.patch
hugetlb-convert-hugetlb_wp-to-use-struct-vm_fault.patch
hugetlb-convert-hugetlb_wp-to-use-struct-vm_fault-fix.patch
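
For readers following the struct vm_fault conversion series, a minimal standalone C sketch of the calling convention this fix adopts: the caller hands hugetlb_wp() only the fault context, and the callee derives the vma and mm it needs from that. The struct definitions and the hugetlb_wp_like() helper below are simplified stand-ins invented for illustration, not the real kernel structures or hugetlb code.

/*
 * Standalone illustration only: the names mirror the kernel's, but the
 * definitions are simplified so the sketch builds with a plain C compiler.
 */
#include <stdio.h>

struct mm_struct      { int id; };
struct vm_area_struct { struct mm_struct *vm_mm; };
struct vm_fault       { struct vm_area_struct *vma; unsigned int flags; };

/* After the fix: only the fault context is passed in. */
static int hugetlb_wp_like(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;	/* derived, not passed */
	struct mm_struct *mm = vma->vm_mm;	/* derived, not passed */

	printf("write fault on mm %d, flags 0x%x\n", mm->id, vmf->flags);
	return 0;
}

int main(void)
{
	struct mm_struct mm = { .id = 1 };
	struct vm_area_struct vma = { .vm_mm = &mm };
	struct vm_fault vmf = { .vma = &vma, .flags = 0 };

	return hugetlb_wp_like(&vmf);
}

Dropping the redundant mm and vma parameters keeps the call sites short and avoids passing values the callee can already reach through vmf, which is the point of this cleanup.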