Re: [RFC] KVM: x86: Support KVM VMs sharing SEV context

On Fri, Mar 12, 2021, Nathan Tempelman wrote:
> On Wed, Feb 24, 2021 at 9:37 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > > @@ -1282,6 +1299,65 @@ int svm_unregister_enc_region(struct kvm *kvm,
> > >       return ret;
> > >  }
> > >
> > > +int svm_vm_copy_asid_to(struct kvm *kvm, unsigned int mirror_kvm_fd)
> > > +{
> > > +     struct file *mirror_kvm_file;
> > > +     struct kvm *mirror_kvm;
> > > +     struct kvm_sev_info *mirror_kvm_sev;
> >
> > What about using src and dst, e.g. src_kvm, dest_kvm_fd, dest_kvm, etc...?  For
> > my brain, the mirror terminology adds an extra layer of translation.
> 
> I like source, but I think I'll keep mirror. I think it captures the current
> state of it better--this isn't its own full-featured SEV VM; in a sense it's
> a reflection of the source.

The two things I dislike about mirror are that (for me) it's not clear whether
"mirror" refers to the source or the destination, and "mirror" implies that
there is ongoing synchronization.

> > > +
> > > +     /*
> > > +      * The mirror_kvm holds an enc_context_owner ref so its asid can't
> > > +      * disappear until we're done with it
> > > +      */
> > > +     kvm_get_kvm(kvm);
> >
> > Do we really need/want to take a reference to the source 'struct kvm'?  IMO,
> > the so called mirror should never be doing operations with its source context,
> > i.e. should not have easy access to 'struct kvm'.  We already have a reference
> > to the fd, any reason not to use that to ensure liveliness of the source?
> 
> I agree the mirror should never be running operations on the source. I don't
> know that holding the fd instead of the kvm makes that much better, though;
> are there advantages to that I'm not seeing?

If there's no kvm pointer, it's much more difficult for someone to do the wrong
thing, and any such shenanigans stick out like a sore thumb in patches, which
makes reviewing future changes easier.
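A sketch of what that could look like, holding only the source VM's file
reference for liveness (helper and field names here are assumptions based on
this discussion, not the final code):

	/*
	 * Illustrative sketch only: keep the source VM alive via its file
	 * reference instead of a 'struct kvm *'.  file_is_kvm() and
	 * source_kvm_file are assumed names.
	 */
	int svm_vm_copy_asid_to(struct kvm *kvm, unsigned int source_kvm_fd)
	{
		struct file *source_kvm_file;

		source_kvm_file = fget(source_kvm_fd);
		if (!file_is_kvm(source_kvm_file)) {
			if (source_kvm_file)
				fput(source_kvm_file);
			return -EBADF;
		}

		/*
		 * Stash only the file pointer; fput() at VM destruction.
		 * The source VM, and thus its ASID, can't disappear, but
		 * the mirror never holds an easy-to-abuse 'struct kvm *'.
		 */
		to_kvm_svm(kvm)->sev_info.source_kvm_file = source_kvm_file;
		...
	}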

> > > +     mutex_unlock(&kvm->lock);
> > > +     mutex_lock(&mirror_kvm->lock);
> > > +
> > > +     /* Set enc_context_owner and copy its encryption context over */
> > > +     mirror_kvm_sev = &to_kvm_svm(mirror_kvm)->sev_info;
> > > +     mirror_kvm_sev->enc_context_owner = kvm;
> > > +     mirror_kvm_sev->asid = asid;
> > > +     mirror_kvm_sev->active = true;
> >
> > I would prefer a prep patch to move "INIT_LIST_HEAD(&sev->regions_list);" from
> > sev_guest_init() to when the VM is instantiated.  Shaving a few cycles in that
> > flow is meaningless, and not initializing the list of regions is odd, and will
> > cause problems if mirrors are allowed to pin memory (or do PSP commands).
> 
> It seems like we can keep this a lot simpler and easier to reason about by not
> allowing mirrors to pin memory or do psp commands. That was the intent. We
> don't gain anything but complexity by allowing this to be a fully featured SEV
> VM. Unless anyone can think of a good reason we'd want to have a mirror
> vm be able to do more than this?

I suspect the migration helper will need to pin memory independent of the real
VM.

But, for me, that's largely orthogonal to initializing regions_list.  Leaving a
list uninitialized for no good reason is an unnecessary risk, as any related
bugs are all but guaranteed to crash the host.
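The prep patch in question could be as small as something like this (sketch;
it assumes the existing svm_vm_init() hook runs at VM instantiation, with the
corresponding INIT_LIST_HEAD dropped from sev_guest_init()):

	/*
	 * Sketch: initialize the SEV regions list when the VM is created,
	 * so it is valid even for VMs that never reach sev_guest_init().
	 */
	static int svm_vm_init(struct kvm *kvm)
	{
		INIT_LIST_HEAD(&to_kvm_svm(kvm)->sev_info.regions_list);
		...
	}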

> > > @@ -5321,6 +5321,11 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
> > >                       kvm->arch.bus_lock_detection_enabled = true;
> > >               r = 0;
> > >               break;
> > > +     case KVM_CAP_VM_COPY_ENC_CONTEXT_TO:
> > > +             r = -ENOTTY;
> > > +             if (kvm_x86_ops.vm_copy_enc_context_to)
> > > +                     r = kvm_x86_ops.vm_copy_enc_context_to(kvm, cap->args[0]);
> >
> > This can be a static call.
> >
> > On a related topic, does this really need to be a separate ioctl()?  TDX can't
> > share encryption contexts, everything that KVM can do for a TDX guest requires
> > the per-VM context.  Unless there is a known non-x86 use case, it might be
> > better to make this a mem_enc_op, and then it can be named SEV_SHARE_ASID or
> > something.
> 
> I'd prefer to leave this as a capability in the same way the
> register_enc_region calls work. Moving it into mem_enc_ops means we'll have
> to do some messy locking to avoid race conditions with the second VM, since
> kvm gets locked in enc_ops.

Eh, it's not that bad.

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 83e00e524513..0cb8a5022580 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1124,6 +1124,9 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
        if (copy_from_user(&sev_cmd, argp, sizeof(struct kvm_sev_cmd)))
                return -EFAULT;

+       if (sev_cmd.id == SEV_SHARE_ASID)
+               return sev_shared_asid(kvm, &sev_cmd);
+
        mutex_lock(&kvm->lock);

        switch (sev_cmd.id) {

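As for making it a static call: the capability handler could look something
like the below (sketch; the kvm_x86_-prefixed static call name follows KVM's
usual convention and is an assumption here):

	case KVM_CAP_VM_COPY_ENC_CONTEXT_TO:
		r = -ENOTTY;
		if (kvm_x86_ops.vm_copy_enc_context_to)
			r = static_call(kvm_x86_vm_copy_enc_context_to)(kvm,
								cap->args[0]);
		break;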
> Also seems weird to me having this hack grouped in with all the PSP commands.
> If I'm the only one that thinks this is cleaner, I'll move it though.

Heh, IMO, that ship already sailed.  KVM_MEMORY_ENCRYPT_OP is quite the misnomer
given that most of the commands do way more than fiddle with memory encryption.
At least with this one, the ASID is directly tied to hardware's encryption of
memory.

> Interesting about the platform, too. If you're sure we'll never need to build
> this for any other platform, I'll at least rename it to be AMD-specific.
> There's no non-SEV scenario anyone can think of that might want to do this?


