On 22/09/2023 at 09:33, Ryan Roberts wrote:
> On 22/09/2023 07:56, Christophe Leroy wrote:
>>
>>
>> On 21/09/2023 at 18:20, Ryan Roberts wrote:
>>> In order to fix a bug, arm64 needs access to the vma inside its
>>> implementation of set_huge_pte_at(). Provide for this by converting the
>>> mm parameter to be a vma. Any implementations that require the mm can
>>> access it via vma->vm_mm.
>>>
>>> This commit makes the required powerpc modifications. Separate commits
>>> update the other arches and core code, before the actual bug is fixed in
>>> arm64.
>>>
>>> No behavioral changes intended.
>>>
>>> Signed-off-by: Ryan Roberts <ryan.roberts@xxxxxxx>
>>> ---
>>>  arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h | 3 ++-
>>>  arch/powerpc/mm/book3s64/hugetlbpage.c           | 2 +-
>>>  arch/powerpc/mm/book3s64/radix_hugetlbpage.c     | 2 +-
>>>  arch/powerpc/mm/nohash/8xx.c                     | 2 +-
>>>  arch/powerpc/mm/pgtable.c                        | 7 ++++++-
>>>  5 files changed, 11 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h b/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h
>>> index de092b04ee1a..fff8cd726bc7 100644
>>> --- a/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h
>>> +++ b/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h
>>> @@ -46,7 +46,8 @@ static inline int check_and_get_huge_psize(int shift)
>>>  }
>>>
>>>  #define __HAVE_ARCH_HUGE_SET_HUGE_PTE_AT
>>> -void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte_t pte);
>>> +void set_huge_pte_at(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep, pte_t pte);
>>> +void __set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte_t pte);
>>
>> Don't add the burden of an additional function; see below.
>>
>>>
>>>  #define __HAVE_ARCH_HUGE_PTE_CLEAR
>>>  static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
>>> diff --git a/arch/powerpc/mm/book3s64/hugetlbpage.c b/arch/powerpc/mm/book3s64/hugetlbpage.c
>>> index 3bc0eb21b2a0..ae7fd7c90eb8 100644
>>> --- a/arch/powerpc/mm/book3s64/hugetlbpage.c
>>> +++ b/arch/powerpc/mm/book3s64/hugetlbpage.c
>>> @@ -147,7 +147,7 @@ void huge_ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr
>>>  	if (radix_enabled())
>>>  		return radix__huge_ptep_modify_prot_commit(vma, addr, ptep,
>>>  							   old_pte, pte);
>>> -	set_huge_pte_at(vma->vm_mm, addr, ptep, pte);
>>> +	set_huge_pte_at(vma, addr, ptep, pte);
>>>  }
>>>
>>>  void __init hugetlbpage_init_defaultsize(void)
>>> diff --git a/arch/powerpc/mm/book3s64/radix_hugetlbpage.c b/arch/powerpc/mm/book3s64/radix_hugetlbpage.c
>>> index 17075c78d4bc..7cd40a334c3a 100644
>>> --- a/arch/powerpc/mm/book3s64/radix_hugetlbpage.c
>>> +++ b/arch/powerpc/mm/book3s64/radix_hugetlbpage.c
>>> @@ -58,5 +58,5 @@ void radix__huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
>>>  	    atomic_read(&mm->context.copros) > 0)
>>>  		radix__flush_hugetlb_page(vma, addr);
>>>
>>> -	set_huge_pte_at(vma->vm_mm, addr, ptep, pte);
>>> +	set_huge_pte_at(vma, addr, ptep, pte);
>>>  }
>>> diff --git a/arch/powerpc/mm/nohash/8xx.c b/arch/powerpc/mm/nohash/8xx.c
>>> index dbbfe897455d..650a7a8496b6 100644
>>> --- a/arch/powerpc/mm/nohash/8xx.c
>>> +++ b/arch/powerpc/mm/nohash/8xx.c
>>> @@ -91,7 +91,7 @@ static int __ref __early_map_kernel_hugepage(unsigned long va, phys_addr_t pa,
>>>  	if (new && WARN_ON(pte_present(*ptep) && pgprot_val(prot)))
>>>  		return -EINVAL;
>>>
>>> -	set_huge_pte_at(&init_mm, va, ptep, pte_mkhuge(pfn_pte(pa >> PAGE_SHIFT, prot)));
>>> +	__set_huge_pte_at(&init_mm, va, ptep, pte_mkhuge(pfn_pte(pa >> PAGE_SHIFT, prot)));
>>
>> Call set_huge_pte_at() with a NULL vma instead.
>
> I'm happy to take your proposed approach if that's your preference. Another
> option is to use a dummy VMA, as I have done in the core code, for the one call
> site that calls set_huge_pte_at() with init_mm:
>
> struct vm_area_struct vma = TLB_FLUSH_VMA(&init_mm, 0);
>
> This is an existing macro that creates a dummy vma with vma->vm_mm filled in.
> Then I pass &vma to the function.

I don't like that; I prefer the solution I proposed. We already have a
couple of places where powerpc does things based on whether the vma is
NULL or not.

>
> Or yet another option would be to keep the mm param as is in set_huge_pte_at(),
> and add a size param to the function. But then all call sites have the burden of
> figuring out the size of the huge pte (although I think most know already).

Indeed. arch_make_huge_pte() used to take a vma until commit 79c1c594f49a
("mm/hugetlb: change parameters of arch_make_huge_pte()"). Should we try
to take the same approach? Or is it irrelevant?

Christophe

>
> Thanks,
> Ryan
>
>>
>>>
>>>  	return 0;
>>>  }
>>> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
>>> index 3f86fd217690..9cbcb561a4d8 100644
>>> --- a/arch/powerpc/mm/pgtable.c
>>> +++ b/arch/powerpc/mm/pgtable.c
>>> @@ -288,7 +288,7 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
>>>  }
>>>
>>>  #if defined(CONFIG_PPC_8xx)
>>> -void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte_t pte)
>>> +void __set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte_t pte)
>>
>> Keep it as set_huge_pte_at() with a vma argument.
>>
>>>  {
>>>  	pmd_t *pmd = pmd_off(mm, addr);
>>
>> Change to:
>>
>> 	pmd_t *pmd = vma ? pmd_off(vma->vm_mm, addr) : pmd_off_k(addr);
>>
>>>  	pte_basic_t val;
>>> @@ -310,6 +310,11 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte_
>>>  	for (i = 0; i < num; i++, entry++, val += SZ_4K)
>>>  		*entry = val;
>>>  }
>>> +
>>> +void set_huge_pte_at(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep, pte_t pte)
>>> +{
>>> +	__set_huge_pte_at(vma->vm_mm, addr, ptep, pte);
>>> +}
>>
>> Remove this burden.
>>
>>>  #endif
>>>  #endif /* CONFIG_HUGETLB_PAGE */
>>>
>>
>>
>> Christophe
>
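
Putting Christophe's suggestions in this thread together, the 8xx side of
the change would look roughly like the sketch below. This is only an
illustration of the proposal being discussed, not the final patch: there is
a single set_huge_pte_at() taking the vma, and a NULL vma is treated as
"operate on init_mm's kernel page tables" via the existing pmd_off_k()
helper.

	/* arch/powerpc/mm/pgtable.c (sketch of the proposed shape) */
	void set_huge_pte_at(struct vm_area_struct *vma, unsigned long addr,
			     pte_t *ptep, pte_t pte)
	{
		/* A NULL vma denotes a kernel mapping in init_mm. */
		pmd_t *pmd = vma ? pmd_off(vma->vm_mm, addr) : pmd_off_k(addr);

		/* ... remainder of the existing 8xx implementation, unchanged ... */
	}

	/* arch/powerpc/mm/nohash/8xx.c early-mapping call site (sketch) */
	set_huge_pte_at(NULL, va, ptep, pte_mkhuge(pfn_pte(pa >> PAGE_SHIFT, prot)));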
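
For comparison, the size-parameter alternative Ryan mentions (mirroring the
precedent Christophe cites in commit 79c1c594f49a, where arch_make_huge_pte()
stopped taking a vma in favour of explicit parameters) would keep the mm and
pass the mapping size at every call site; the parameter name below is purely
illustrative:

	void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
			     pte_t *ptep, pte_t pte, unsigned long sz);

Callers such as the 8xx early-mapping code could then keep passing &init_mm,
at the cost of every call site having to know the huge page size.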