On Fri, Mar 11, 2022 at 12:25:09AM +0000, David Matlack wrote:
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 519910938478..e866e05c4ba5 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -1716,16 +1716,9 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu,
>  	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
>  	if (!direct)
>  		sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);
> +
>  	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);

Trivial nit: I read Ben's comment on the previous version, and keeping the two linkages together sounds reasonable. It's just a bit of a pity that we need to set the private field manually for each allocation. Meanwhile, we have another counter-example in the TDP MMU code (tdp_mmu_init_sp()), so we may want to align the tdp/shadow cases at some point..

-- 
Peter Xu
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm