On Tue, Aug 20, 2019 at 12:51:28AM -0700, Song Liu wrote:
> pti_clone_pgtable() increases addr by PUD_SIZE for the pud_none(*pud)
> case. This is not accurate because addr may not be PUD_SIZE aligned.
>
> In our x86_64 kernel, pti_clone_pgtable() fails to clone 7 PMDs because
> of this issue, including the PMD for the irq entry table. For a
> memcache-like workload, this introduces about 4.5x more iTLB-load and
> about 2.5x more iTLB-load-misses on a Skylake CPU.
>
> This patch fixes this issue by adding PMD_SIZE to addr for the
> pud_none() case.
>
> diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
> index b196524759ec..5a67c3015f59 100644
> --- a/arch/x86/mm/pti.c
> +++ b/arch/x86/mm/pti.c
> @@ -330,7 +330,7 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
>
>  		pud = pud_offset(p4d, addr);
>  		if (pud_none(*pud)) {
> -			addr += PUD_SIZE;
> +			addr += PMD_SIZE;
>  			continue;
>  		}

I'm thinking you're right in that there's a bug here, but I'm also
thinking your patch is both incomplete and broken.

What that code wants to do is skip to the end of the pud; a PMD_SIZE
increase will not do that. And right below this, there's a second
instance of this exact pattern.

Did I get the below right?

---
diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
index b196524759ec..32b20b3cb227 100644
--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -330,12 +330,14 @@ pti_clone_pgtable(unsigned long start, unsigned long end,

 		pud = pud_offset(p4d, addr);
 		if (pud_none(*pud)) {
+			addr &= PUD_MASK;
 			addr += PUD_SIZE;
 			continue;
 		}

 		pmd = pmd_offset(pud, addr);
 		if (pmd_none(*pmd)) {
+			addr &= PMD_MASK;
 			addr += PMD_SIZE;
 			continue;
 		}
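
As a sanity check on the arithmetic, here is a minimal standalone sketch
(not part of either patch), assuming the x86_64 4-level paging values
PUD_SHIFT = 30 and PMD_SHIFT = 21. It shows why a bare addr += PUD_SIZE
on an address that is not PUD aligned lands partway into the next pud,
skipping the PMDs at its start, while masking with PUD_MASK first lands
exactly on the next boundary.

/*
 * Standalone illustration only, not kernel code. The shift values
 * below assume x86_64 with 4-level paging.
 */
#include <stdio.h>

#define PMD_SHIFT	21
#define PUD_SHIFT	30
#define PMD_SIZE	(1UL << PMD_SHIFT)
#define PUD_SIZE	(1UL << PUD_SHIFT)
#define PUD_MASK	(~(PUD_SIZE - 1))

int main(void)
{
	/* PMD aligned but not PUD aligned, like the addresses the bug hits. */
	unsigned long addr = (5UL << PUD_SHIFT) + (3UL << PMD_SHIFT);

	/* Current code: ends up partway into the next pud. */
	unsigned long bare_add = addr + PUD_SIZE;

	/* Proposed mask-then-add: lands exactly on the next pud boundary. */
	unsigned long mask_add = (addr & PUD_MASK) + PUD_SIZE;

	printf("addr                         = %#lx\n", addr);
	printf("addr + PUD_SIZE              = %#lx (still not PUD aligned)\n", bare_add);
	printf("(addr & PUD_MASK) + PUD_SIZE = %#lx (next pud boundary)\n", mask_add);
	return 0;
}

Masking before adding is just "round up to the next PUD boundary"; when
addr is already aligned it degenerates to the old addr += PUD_SIZE, so an
empty pud is still skipped in one step. The same reasoning applies to the
PMD_MASK/PMD_SIZE hunk below it.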