On 6/15/22 01:33, Sean Christopherson wrote:
> @@ -2027,8 +2013,8 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
>  		role.direct = direct;
>  		role.access = access;
>  		if (role.has_4_byte_gpte) {
> -			quadrant = gaddr >> (PAGE_SHIFT + (PT64_PT_BITS * level));
> -			quadrant &= (1 << ((PT32_PT_BITS - PT64_PT_BITS) * level)) - 1;
> +			quadrant = gaddr >> (PAGE_SHIFT + (SPTE_LEVEL_BITS * level));
> +			quadrant &= (1 << ((PT32_LEVEL_BITS - SPTE_LEVEL_BITS) * level)) - 1;
>  			role.quadrant = quadrant;
That's just a fancy 1, though, and this is just

	/*
	 * Isolate the bits of the address that have already been used by the
	 * 8-byte shadow page table structures, but not yet in the 4-byte guest
	 * page tables. For example, a 4-byte PDE consumes bits 31:22 and an
	 * 8-byte PDE consumes bits 29:21, so bits 31:30 go in the hash
	 * key.  The hash table lookup ensures that each sPTE points to
	 * the page for the correct portion of the guest page table structure.
	 */
	quadrant = gaddr >> (PAGE_SHIFT + (SPTE_LEVEL_BITS * level));
	quadrant &= (1 << level) - 1;

(Not the best comment, understood).

Paolo