On Thu, Feb 03, 2022, David Matlack wrote:
> Instead of computing the shadow page role from scratch for every new
> page, we can derive most of the information from the parent shadow page.
> This avoids redundant calculations such as the quadrant, and reduces the

Uh, calculating quadrant isn't redundant.  The quadrant forces KVM to use
different (multiple) shadow pages to shadow a single guest PTE when the
guest is using 32-bit paging (1024 PTEs per page table vs. 512 PTEs per
page table).

The reason quadrant is "quad" and not more or less is because 32-bit paging
has two levels.  First-level PTEs can have quadrant=0/1, and that gets
doubled for second-level PTEs because we need to use four PTEs (two to
handle 2x guest PTEs, and each of those needs to be unique for the
first-level PTEs they point at).  (A rough sketch of the quadrant math is
at the end of this mail.)

Indeed, this fails spectacularly when attempting to boot a 32-bit non-PAE
kernel with shadow paging enabled.

  BUG: unable to handle page fault for address: ff9fa81c
  #PF: supervisor read access in kernel mode
  #PF: error_code(0x0000) - not-present page
  *pde = 00000000
  Oops: 0000 [#1] SMP
  CPU: 0 PID: 0 Comm: swapper G W 5.12.0 #10
  EIP: memblock_add_range.isra.18.constprop.23
  Code: <83> 79 04 00 75 2c 83 38 01 75 06 83 78 08 00 74 02 0f 0b 89 11 8b
  EAX: c2af24bc EBX: fdffffff ECX: ff9fa818 EDX: 02000000
  ESI: 02000000 EDI: 00000000 EBP: c2909f30 ESP: c2909f0c
  DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068 EFLAGS: 00210006
  CR0: 80050033 CR2: ff9fa81c CR3: 02b76000 CR4: 00040600
  Call Trace:
   ? printk
   memblock_reserve
   ? 0xc2000000
   setup_arch
   ? vprintk_default
   ? vprintk
   start_kernel
   i386_start_kernel
   startup_32_smp
  CR2: 00000000ff9fa81c
  EIP: memblock_add_range.isra.18.constprop.23
  Code: <83> 79 04 00 75 2c 83 38 01 75 06 83 78 08 00 74 02 0f 0b 89 11 8b
  EAX: c2af24bc EBX: fdffffff ECX: ff9fa818 EDX: 02000000
  ESI: 02000000 EDI: 00000000 EBP: c2909f30 ESP: c2909f0c
  DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068 EFLAGS: 00210006
  CR0: 80050033 CR2: ff9fa81c CR3: 02b76000 CR4: 00040600

> number of parameters to kvm_mmu_get_page().
>
> Preemptivel split out the role calculation to a separate function for

Preemptively.

> use in a following commit.
>
> No functional change intended.
>
> Signed-off-by: David Matlack <dmatlack@xxxxxxxxxx>
> ---
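
For reference, here is a minimal, self-contained sketch of the quadrant
math described above.  It mirrors the shift/mask logic kvm_mmu_get_page
has historically used, but the helper name and standalone form are
illustrative only, not actual KVM code:

/*
 * Sketch only, not the in-tree helper.  A 32-bit non-PAE guest has 1024
 * 4-byte PTEs per page table, while KVM's shadow pages hold 512 8-byte
 * SPTEs, so each guest table is shadowed by multiple shadow pages that
 * are distinguished by the quadrant.  "level" is the shadow page's level
 * (1 = page table, 2 = page directory), "gaddr" is the guest virtual
 * address being shadowed.
 */
static unsigned int quadrant_for(unsigned long long gaddr, int level)
{
	const int guest_idx_bits  = 10;	/* 1024 entries per 32-bit guest table */
	const int shadow_idx_bits = 9;	/*  512 entries per shadow page table  */
	unsigned int quadrant;

	/* Guest address bits above what one shadow page at this level maps. */
	quadrant = gaddr >> (12 /* PAGE_SHIFT */ + shadow_idx_bits * level);

	/* One extra bit per level: 2 quadrants at level 1, 4 at level 2. */
	quadrant &= (1u << ((guest_idx_bits - shadow_idx_bits) * level)) - 1;

	return quadrant;
}

At level 1 that picks bit 21 of the guest address, i.e. which half of the
1024-entry guest page table the shadow page covers (quadrant 0/1).  At
level 2 it picks bits 31:30, i.e. which 1GiB quarter of the guest's 4GiB
page directory the shadow page covers (quadrant 0-3).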