On Tue, Mar 22, 2022 at 11:30:07AM -0700, David Matlack wrote:
> > > +static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct, u32 access)
> > > +{
> > > +	struct kvm_mmu_page *parent_sp = sptep_to_sp(sptep);
> > > +	union kvm_mmu_page_role role;
> > > +
> > > +	role = parent_sp->role;
> > > +	role.level--;
> > > +	role.access = access;
> > > +	role.direct = direct;
> > > +
> > > +	/*
> > > +	 * If the guest has 4-byte PTEs then that means it's using 32-bit,
> > > +	 * 2-level, non-PAE paging. KVM shadows such guests using 4 PAE page
> > > +	 * directories, each mapping 1/4 of the guest's linear address space
> > > +	 * (1GiB). The shadow pages for those 4 page directories are
> > > +	 * pre-allocated and assigned a separate quadrant in their role.
> > > +	 *
> > > +	 * Since we are allocating a child shadow page and there are only 2
> > > +	 * levels, this must be a PG_LEVEL_4K shadow page. Here the quadrant
> > > +	 * will either be 0 or 1 because it maps 1/2 of the address space mapped
> > > +	 * by the guest's PG_LEVEL_4K page table (or 4MiB huge page) that it
> > > +	 * is shadowing. In this case, the quadrant can be derived by the index
> > > +	 * of the SPTE that points to the new child shadow page in the page
> > > +	 * directory (parent_sp). Specifically, every 2 SPTEs in parent_sp
> > > +	 * shadow one half of a guest's page table (or 4MiB huge page) so the
> > > +	 * quadrant is just the parity of the index of the SPTE.
> > > +	 */
> > > +	if (role.has_4_byte_gpte) {
> > > +		BUG_ON(role.level != PG_LEVEL_4K);
> > > +		role.quadrant = (sptep - parent_sp->spt) % 2;
> > > +	}
> >
> > This made me wonder whether role.quadrant can be dropped, because it seems
> > it can be calculated out of the box with has_4_byte_gpte, level and spte
> > offset.  I could have missed something, though..
>
> I think you're right that we could compute it on-the-fly.  But it'd be
> non-trivial to remove since it's currently used to ensure the sp->role
> and sp->gfn uniquely identifies each shadow page (e.g. when checking
> for collisions in the mmu_page_hash).

Makes sense.

-- 
Peter Xu

_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm