For large mappings, the pgtable PAT bit is set on bit 12 (_PAGE_PAT_LARGE)
rather than bit 7 (_PAGE_PAT), since bit 7 is reused as the PSE bit on
large mappings.  Do the proper shifting when injecting large pfn pgtable
mappings so that the cache mode is applied correctly.

Cc: Alex Williamson <alex.williamson@xxxxxxxxxx>
Cc: Jason Gunthorpe <jgg@xxxxxxxxxx>
Cc: Al Viro <viro@xxxxxxxxxxxxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: Andy Lutomirski <luto@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxxxx>
Cc: Kirill A. Shutemov <kirill@xxxxxxxxxxxxx>
Cc: x86@xxxxxxxxxx
Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
---
 mm/huge_memory.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 317de2afd371..c4a2356b1a54 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1135,7 +1135,7 @@ static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
 		goto out_unlock;
 	}
 
-	entry = pmd_mkhuge(pfn_t_pmd(pfn, prot));
+	entry = pmd_mkhuge(pfn_t_pmd(pfn, pgprot_4k_2_large(prot)));
 	if (pfn_t_devmap(pfn))
 		entry = pmd_mkdevmap(entry);
 	if (write) {
@@ -1233,7 +1233,7 @@ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
 		goto out_unlock;
 	}
 
-	entry = pud_mkhuge(pfn_t_pud(pfn, prot));
+	entry = pud_mkhuge(pfn_t_pud(pfn, pgprot_4k_2_large(prot)));
 	if (pfn_t_devmap(pfn))
 		entry = pud_mkdevmap(entry);
 	if (write) {
-- 
2.45.0
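
For reference, below is a minimal standalone sketch of the PAT bit relocation
that x86's pgprot_4k_2_large() performs; it is illustrative only (not the
kernel implementation), with the PAGE_BIT_* names standing in for the kernel's
_PAGE_BIT_PAT / _PAGE_BIT_PAT_LARGE definitions in
arch/x86/include/asm/pgtable_types.h.

/*
 * Illustrative sketch: on 4K PTEs the PAT bit lives in bit 7, but on
 * 2M/1G mappings bit 7 is the PSE bit, so the PAT bit has to move to
 * bit 12 before the value is installed into a PMD/PUD entry.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_BIT_PAT        7   /* PAT position on 4K PTEs (PSE on large) */
#define PAGE_BIT_PAT_LARGE  12  /* PAT position on 2M/1G entries */
#define PAGE_PAT            (1ULL << PAGE_BIT_PAT)
#define PAGE_PAT_LARGE      (1ULL << PAGE_BIT_PAT_LARGE)

/* Move a 4K-encoded PAT bit into the large-page PAT position. */
static uint64_t prot_4k_to_large(uint64_t prot)
{
        uint64_t pat = prot & PAGE_PAT;

        prot &= ~(PAGE_PAT | PAGE_PAT_LARGE);
        return prot | (pat << (PAGE_BIT_PAT_LARGE - PAGE_BIT_PAT));
}

int main(void)
{
        /* Example 4K encoding: present + rw (0x3) with the 4K PAT bit set. */
        uint64_t prot_4k = PAGE_PAT | 0x3;

        printf("4k prot 0x%llx -> large prot 0x%llx\n",
               (unsigned long long)prot_4k,
               (unsigned long long)prot_4k_to_large(prot_4k));
        return 0;
}

Running this prints "4k prot 0x83 -> large prot 0x1003", i.e. the cache-mode
bit moves from bit 7 to bit 12, which is what the two hunks above arrange for
by converting prot before building the huge PMD/PUD entry.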