On Wed, 13 Oct 2021 16:58:31 +0100,
Quentin Perret <qperret@xxxxxxxxxx> wrote:
>
> Make use of the newly introduced unshare hypercall during guest teardown
> to unmap guest-related data structures from the hyp stage-1.
>
> Signed-off-by: Quentin Perret <qperret@xxxxxxxxxx>
> ---
>  arch/arm64/include/asm/kvm_host.h |  2 ++
>  arch/arm64/include/asm/kvm_mmu.h  |  1 +
>  arch/arm64/kvm/arm.c              |  2 ++
>  arch/arm64/kvm/fpsimd.c           | 10 ++++++++--
>  arch/arm64/kvm/mmu.c              | 16 ++++++++++++++++
>  arch/arm64/kvm/reset.c            | 13 ++++++++++++-
>  6 files changed, 41 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index f8be56d5342b..8b61cdcd1b29 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -322,6 +322,8 @@ struct kvm_vcpu_arch {
>
>  	struct thread_info *host_thread_info;	/* hyp VA */
>  	struct user_fpsimd_state *host_fpsimd_state;	/* hyp VA */
> +	struct thread_info *kern_thread_info;
> +	struct user_fpsimd_state *kern_fpsimd_state;
>
>  	struct {
>  		/* {Break,watch}point registers */
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 185d0f62b724..81839e9a8a24 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -151,6 +151,7 @@ static __always_inline unsigned long __kern_hyp_va(unsigned long v)
>  #include <asm/stage2_pgtable.h>
>
>  int kvm_share_hyp(void *from, void *to);
> +void kvm_unshare_hyp(void *from, void *to);
>  int create_hyp_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
>  int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
>  			   void __iomem **kaddr,
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index f2e74635332b..f11c51db6fe6 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -188,6 +188,8 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
>  		}
>  	}
>  	atomic_set(&kvm->online_vcpus, 0);
> +
> +	kvm_unshare_hyp(kvm, kvm + 1);
>  }
>
>  int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
> diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
> index 2fe1128d9f3d..67059daf4d26 100644
> --- a/arch/arm64/kvm/fpsimd.c
> +++ b/arch/arm64/kvm/fpsimd.c
> @@ -28,23 +28,29 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
>  {
>  	int ret;
>
> -	struct thread_info *ti = &current->thread_info;
> -	struct user_fpsimd_state *fpsimd = &current->thread.uw.fpsimd_state;
> +	struct thread_info *ti = vcpu->arch.kern_thread_info;
> +	struct user_fpsimd_state *fpsimd = vcpu->arch.kern_fpsimd_state;
>
>  	/*
>  	 * Make sure the host task thread flags and fpsimd state are
>  	 * visible to hyp:
>  	 */
> +	kvm_unshare_hyp(ti, ti + 1);

At this stage, the old thread may have been destroyed and the memory
recycled. What happens if, in the interval, that memory gets shared
again in another context? My gut feeling is that either the sharing
fails, or the unshare above breaks something else (the refcounting
doesn't save us if the sharing is not with HYP).

In any case, I wonder whether we need a notification from the core code
that a thread for which we have a HYP mapping is gone, so that we can
mop up the mapping at that point. That's similar to what we have for
MMU notifiers and S2 PTs.

This is doable by hooking into fpsimd_release_task() and extending
thread_struct to track the sharing with HYP.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
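
[Editor's sketch] For illustration only, a rough and untested outline of the
fpsimd_release_task() hook suggested above. The shared_with_hyp field, the
kvm_unshare_task() helper and the flag-setting in kvm_arch_vcpu_run_map_fp()
are made-up names for the purpose of the example (a !CONFIG_KVM stub is also
omitted); only fpsimd_release_task(), task_thread_info() and the
kvm_share_hyp()/kvm_unshare_hyp() interfaces from this series are real.

	/* arch/arm64/include/asm/processor.h */
	struct thread_struct {
		/* ... existing fields ... */
		bool	shared_with_hyp;	/* hyp maps this task's state */
	};

	/*
	 * Hypothetical helper, e.g. in arch/arm64/kvm/fpsimd.c. The flag
	 * would be set by kvm_arch_vcpu_run_map_fp() once its
	 * kvm_share_hyp() calls on current have succeeded.
	 */
	void kvm_unshare_task(struct task_struct *tsk)
	{
		struct thread_info *ti = task_thread_info(tsk);
		struct user_fpsimd_state *fpsimd = &tsk->thread.uw.fpsimd_state;

		if (!tsk->thread.shared_with_hyp)
			return;

		/* Drop the hyp stage-1 mappings while the task still exists */
		kvm_unshare_hyp(ti, ti + 1);
		kvm_unshare_hyp(fpsimd, fpsimd + 1);
		tsk->thread.shared_with_hyp = false;
	}

	/* arch/arm64/kernel/fpsimd.c */
	void fpsimd_release_task(struct task_struct *dead_task)
	{
		__sve_free(dead_task);		/* existing body */
		kvm_unshare_task(dead_task);	/* new: mop up the hyp mapping */
	}

With something like this, kvm_arch_vcpu_run_map_fp() would no longer need to
unshare a possibly recycled thread_info on the next vcpu run, since the
mapping is torn down when the owning task is released.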