On Tue, Sep 6, 2022 at 3:25 AM Jarkko Sakkinen <jarkko@xxxxxxxxxx> wrote:
>
> On Tue, Aug 09, 2022 at 06:55:43PM +0200, Borislav Petkov wrote:
> > On Mon, Jun 20, 2022 at 11:03:43PM +0000, Ashish Kalra wrote:
> > > +	pfn = pte_pfn(*pte);
> > > +
> > > +	/* If its large page then calculte the fault pfn */
> > > +	if (level > PG_LEVEL_4K) {
> > > +		unsigned long mask;
> > > +
> > > +		mask = pages_per_hpage(level) - pages_per_hpage(level - 1);
> > > +		pfn |= (address >> PAGE_SHIFT) & mask;
> >
> > Oh boy, this is unnecessarily complicated. Isn't this
> >
> > 	pfn |= pud_index(address);
> >
> > or
> >
> > 	pfn |= pmd_index(address);
>
> I played with this a bit and ended up with
>
> 	pfn = pte_pfn(*pte) | PFN_DOWN(address & page_level_mask(level - 1));
>
> Unless I got something terribly wrong, this should do the
> same (see the attached patch) as the existing calculations.

Actually, I don't think they're the same. I think Jarkko's version is
correct. Specifically:

- For level = PG_LEVEL_2M they're the same.

- For level = PG_LEVEL_1G, the current code calculates a garbage mask:

	mask = pages_per_hpage(level) - pages_per_hpage(level - 1);

  translates to:

	>>> hex(262144 - 512)
	'0x3fe00'

  But I believe Jarkko's version calculates the correct mask (below),
  incorporating all 18 offset bits into the 1G page:

	>>> hex(262144 - 1)
	'0x3ffff'
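
To sanity-check the arithmetic outside the kernel, here is a small
standalone Python sketch. To be clear, this is not the kernel code:
the PG_LEVEL_* shifts are just the standard x86-64 values, and
pages_per_hpage() below is a stand-in for the kernel helper of the
same name.

PAGE_SHIFT = 12
LEVEL_SHIFT = {"PG_LEVEL_4K": 12, "PG_LEVEL_2M": 21, "PG_LEVEL_1G": 30}

def pages_per_hpage(level):
    # Number of 4K pages covered by one huge page at this level.
    return 1 << (LEVEL_SHIFT[level] - PAGE_SHIFT)

# Compare the patch's mask against the full offset mask at each level.
for level, prev in [("PG_LEVEL_2M", "PG_LEVEL_4K"),
                    ("PG_LEVEL_1G", "PG_LEVEL_2M")]:
    patch_mask = pages_per_hpage(level) - pages_per_hpage(prev)
    full_mask = pages_per_hpage(level) - 1
    print(level, hex(patch_mask), hex(full_mask))

# Output:
#   PG_LEVEL_2M 0x1ff 0x1ff      <- identical, as noted above
#   PG_LEVEL_1G 0x3fe00 0x3ffff  <- the patch drops offset bits 0-8

The 2M case only works out because pages_per_hpage(level - 1) happens
to be 1 there; at 1G it is 512, so the subtraction masks away the low
nine bits of the page offset.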