6.6-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Alexandre Ghiti <alexghiti@xxxxxxxxxxxx>

[ Upstream commit a179a4bfb694f80f2709a1d0398469e787acb974 ]

When NAPOT is enabled, a new hugepage size is available and then we need
to make hugetlb_mask_last_page() aware of that.

Fixes: 82a1a1f3bfb6 ("riscv: mm: support Svnapot in hugetlb page")
Signed-off-by: Alexandre Ghiti <alexghiti@xxxxxxxxxxxx>
Link: https://lore.kernel.org/r/20240117195741.1926459-3-alexghiti@xxxxxxxxxxxx
Signed-off-by: Palmer Dabbelt <palmer@xxxxxxxxxxxx>
Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
---
 arch/riscv/mm/hugetlbpage.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
index 24c0179565d8..87af75ee7186 100644
--- a/arch/riscv/mm/hugetlbpage.c
+++ b/arch/riscv/mm/hugetlbpage.c
@@ -125,6 +125,26 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
 	return pte;
 }
 
+unsigned long hugetlb_mask_last_page(struct hstate *h)
+{
+	unsigned long hp_size = huge_page_size(h);
+
+	switch (hp_size) {
+#ifndef __PAGETABLE_PMD_FOLDED
+	case PUD_SIZE:
+		return P4D_SIZE - PUD_SIZE;
+#endif
+	case PMD_SIZE:
+		return PUD_SIZE - PMD_SIZE;
+	case napot_cont_size(NAPOT_CONT64KB_ORDER):
+		return PMD_SIZE - napot_cont_size(NAPOT_CONT64KB_ORDER);
+	default:
+		break;
+	}
+
+	return 0UL;
+}
+
 static pte_t get_clear_contig(struct mm_struct *mm,
 			      unsigned long addr,
 			      pte_t *ptep,
-- 
2.43.0