On Tue, Sep 06, 2022 at 06:44:23PM +0300, Jarkko Sakkinen wrote:
> On Tue, Sep 06, 2022 at 02:17:15PM +0000, Kalra, Ashish wrote:
> > [AMD Official Use Only - General]
> >
> > >> On Tue, Aug 09, 2022 at 06:55:43PM +0200, Borislav Petkov wrote:
> > >> > On Mon, Jun 20, 2022 at 11:03:43PM +0000, Ashish Kalra wrote:
> > >> > > +	pfn = pte_pfn(*pte);
> > >> > > +
> > >> > > +	/* If it's a large page then calculate the fault pfn */
> > >> > > +	if (level > PG_LEVEL_4K) {
> > >> > > +		unsigned long mask;
> > >> > > +
> > >> > > +		mask = pages_per_hpage(level) - pages_per_hpage(level - 1);
> > >> > > +		pfn |= (address >> PAGE_SHIFT) & mask;
> > >> >
> > >> > Oh boy, this is unnecessarily complicated. Isn't this
> > >> >
> > >> > 	pfn |= pud_index(address);
> > >> >
> > >> > or
> > >> >
> > >> > 	pfn |= pmd_index(address);
> > >>
> > >> I played with this a bit and ended up with
> > >>
> > >> 	pfn = pte_pfn(*pte) | PFN_DOWN(address & page_level_mask(level - 1));
> > >>
> > >> Unless I got something terribly wrong, this should do the same (see
> > >> the attached patch) as the existing calculations.
> >
> > >Actually, I don't think they're the same. I think Jarkko's version is correct. Specifically:
> > >- For level = PG_LEVEL_2M they're the same.
> > >- For level = PG_LEVEL_1G:
> > >The current code calculates a garbage mask:
> > >mask = pages_per_hpage(level) - pages_per_hpage(level - 1); translates to:
> > >>> hex(262144 - 512)
> > >'0x3fe00'
> >
> > No, actually this is not a garbage mask. As I explained in earlier responses, we need to capture the
> > address bits that give the correct 4K index into the RMP table.
> > Therefore, for level = PG_LEVEL_1G:
> > mask = pages_per_hpage(level) - pages_per_hpage(level - 1) => 0x3fe00 (which is the correct mask).
> >
> > >But I believe Jarkko's version calculates the correct mask (below), incorporating all 18 offset bits into the 1G page.
> > >>> hex(262144 - 1)
> > >'0x3ffff'
> >
> > We can get this simply by doing (pages_per_hpage(level) - 1), but as I mentioned above this is not what we need.
>
> I think you're correct, so I'll retry:
>
> (address / PAGE_SIZE) & (pages_per_hpage(level) - pages_per_hpage(level - 1)) =
>
> (address / PAGE_SIZE) & ((page_level_size(level) / PAGE_SIZE) - (page_level_size(level - 1) / PAGE_SIZE)) =
>
> [ factor out 1 / PAGE_SIZE ]
>
> (address & (page_level_size(level) - page_level_size(level - 1))) / PAGE_SIZE =
>
> [ substitute with PFN_DOWN() ]
>
> PFN_DOWN(address & (page_level_size(level) - page_level_size(level - 1)))
>
> So you can just do:
>
> 	pfn = pte_pfn(*pte) | PFN_DOWN(address & (page_level_size(level) - page_level_size(level - 1)));
>
> Which is IMHO way better than what is there now: no branching, and no
> ad-hoc helpers (the current pages_per_hpage() is essentially just a
> page_level_size() wrapper).
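As a quick sanity check that the current form and the proposed form
agree, here's a userspace sketch. PAGE_SHIFT, PFN_DOWN() and the two
helpers are local stand-ins for the kernel definitions (x86_64
PG_LEVEL_* numbering assumed, i.e. 1 = 4K, 2 = 2M, 3 = 1G), so it only
mirrors the arithmetic, not the real code:

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PFN_DOWN(x)	((x) >> PAGE_SHIFT)

static unsigned long page_level_size(int level)
{
	return 1UL << (PAGE_SHIFT + 9 * (level - 1));
}

static unsigned long pages_per_hpage(int level)
{
	return page_level_size(level) / PAGE_SIZE;
}

int main(void)
{
	unsigned long address = 0x7f1234567000UL;	/* arbitrary fault address */
	int level;

	for (level = 2; level <= 3; level++) {
		/* Current form: mask applied to the page number. */
		unsigned long mask = pages_per_hpage(level) -
				     pages_per_hpage(level - 1);
		unsigned long cur = (address >> PAGE_SHIFT) & mask;
		/* Proposed form: mask applied to the address, then PFN_DOWN(). */
		unsigned long alt = PFN_DOWN(address & (page_level_size(level) -
							page_level_size(level - 1)));

		printf("level %d: 0x%lx 0x%lx %s\n", level, cur, alt,
		       cur == alt ? "match" : "MISMATCH");
	}

	return 0;
}

Both levels print "match", which is expected, since (x >> PAGE_SHIFT) & m
equals (x & (m << PAGE_SHIFT)) >> PAGE_SHIFT for any mask m.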
I also created a small test program to look at the masks themselves:

$ cat test.c
#include <stdio.h>

int main(void)
{
	unsigned long arr[] = {0x8, 0x1000, 0x200000, 0x40000000, 0x8000000000};
	int i;

	for (i = 1; i < sizeof(arr)/sizeof(unsigned long); i++) {
		printf("%048b\n", arr[i] - arr[i - 1]);
		printf("%048b\n", (arr[i] - 1) ^ (arr[i - 1] - 1));
	}
}

$ gcc -o test test.c
$ ./test
000000000000000000000000000000000000111111111000
000000000000000000000000000000000000111111111000
000000000000000000000000000111111111000000000000
000000000000000000000000000111111111000000000000
000000000000000000111111111000000000000000000000
000000000000000000111111111000000000000000000000
000000000111111111000000000000000000000000000000
000000000111111111000000000000000000000000000000

So the operation could be described as:

	pfn = PFN_DOWN(address & (~page_level_mask(level) ^ ~page_level_mask(level - 1)));

Which IMHO already documents itself quite well: index with the
granularity of the level by removing the bits used by the levels below
it.
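And to see that final form end to end, a similar sketch (again with
local stand-ins for the kernel helpers, and a made-up 1G huge page
pfn, so purely illustrative):

#include <stdio.h>

#define PAGE_SHIFT	12
#define PFN_DOWN(x)	((x) >> PAGE_SHIFT)

/* Stand-ins for the kernel helpers (x86_64: 1 = 4K, 2 = 2M, 3 = 1G). */
static unsigned long page_level_size(int level)
{
	return 1UL << (PAGE_SHIFT + 9 * (level - 1));
}

static unsigned long page_level_mask(int level)
{
	return ~(page_level_size(level) - 1);
}

int main(void)
{
	unsigned long address = 0x7f1234567000UL;	/* fault address */
	unsigned long huge_pfn = 0x140000;	/* made-up pte_pfn() of a 1G mapping */
	int level = 3;				/* PG_LEVEL_1G */

	unsigned long pfn = huge_pfn |
		PFN_DOWN(address & (~page_level_mask(level) ^
				    ~page_level_mask(level - 1)));

	/* 1G pfn plus the 2M-granular index inside it, counted in 4K pages. */
	printf("pfn = 0x%lx\n", pfn);

	return 0;
}

For PG_LEVEL_2M the same expression picks up the full 4K index inside
the 2M page, so it matches the current code's behaviour for both
large-page levels.

BR, Jarkko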