Re: [PATCH 5/5] Documentation: kvm: introduce "VM plane" concept

On Tue, Jan 21, 2025, Nicolas Saenz Julienne wrote:
> Hi Sean,
> 
> On Fri Jan 17, 2025 at 9:48 PM UTC, Sean Christopherson wrote:
> > On Wed, Oct 23, 2024, Paolo Bonzini wrote:
> >> @@ -6398,6 +6415,46 @@ the capability to be present.
> >>  `flags` must currently be zero.
> >>
> >>
> >> +.. _KVM_CREATE_PLANE:
> >> +
> >> +4.144 KVM_CREATE_PLANE
> >> +----------------------
> >> +
> >> +:Capability: KVM_CAP_PLANE
> >> +:Architectures: none
> >> +:Type: vm ioctl
> >> +:Parameters: plane id
> >> +:Returns: a VM fd that can be used to control the new plane.
> >> +
> >> +Creates a new *plane*, i.e. a separate privilege level for the
> >> +virtual machine.  Each plane has its own memory attributes,
> >> +which can be used to enable more restricted permissions than
> >> +what is allowed with ``KVM_SET_USER_MEMORY_REGION``.
> >> +
> >> +Each plane has a numeric id that is used when communicating
> >> +with KVM through the :ref:`kvm_run <kvm_run>` struct.  While
> >> +KVM is currently agnostic to whether low ids are more or less
> >> +privileged, it is expected that this will not always be the
> >> +case in the future.  For example, KVM may use the plane id
> >> +when planes are supported by hardware (as is the case for
> >> +VMPLs on AMD), or if KVM gains accelerated plane-switch
> >> +operations (as might be the case for Hyper-V VTLs).
> >> +
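As a concrete (and purely illustrative) example of the uAPI above,
assuming the series' KVM_CREATE_PLANE definition is visible via
<linux/kvm.h>; the fd names are mine:

  #include <err.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Create plane 1 on an existing VM, per the proposed docs above:
   * the argument is the plane id, the return value is a new VM fd. */
  int create_plane(int vm_fd)
  {
          int plane_fd = ioctl(vm_fd, KVM_CREATE_PLANE, 1);

          if (plane_fd < 0)
                  err(1, "KVM_CREATE_PLANE");
          return plane_fd;
  }
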
> >> +4.145 KVM_CREATE_VCPU_PLANE
> >> +---------------------------
> >> +
> >> +:Capability: KVM_CAP_PLANE
> >> +:Architectures: none
> >> +:Type: vm ioctl (non-default plane)
> >> +:Parameters: vcpu file descriptor for the default plane
> >> +:Returns: a vCPU fd that can be used to control the new plane
> >> +          for the vCPU.
> >> +
> >> +Adds a vCPU to a plane; the new vCPU's id comes from the vCPU
> >> +file descriptor that is passed as the argument.  Note that,
> >> +because of how the API is defined, planes other than plane 0
> >> +can only have a subset of the ids that are available in plane 0.
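
And similarly for KVM_CREATE_VCPU_PLANE (again purely illustrative;
this assumes plane_fd from the sketch above and vcpu0_fd, the fd of a
plane-0 vCPU):

  /* Add an existing plane-0 vCPU to the new plane.  Per the docs
   * above, the ioctl is issued on the non-default plane's VM fd and
   * takes the plane-0 vCPU's fd as its argument. */
  int create_vcpu_plane(int plane_fd, int vcpu0_fd)
  {
          int fd = ioctl(plane_fd, KVM_CREATE_VCPU_PLANE, vcpu0_fd);

          if (fd < 0)
                  err(1, "KVM_CREATE_VCPU_PLANE");
          return fd;
  }
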
> >
> > Hmm, was there a reason why we decided to add KVM_CREATE_VCPU_PLANE, as opposed
> > to having KVM_CREATE_PLANE create vCPUs?  IIRC, we talked about being able to
> > provide the new FD, but that would be easy enough to handle in KVM_CREATE_PLANE,
> > e.g. with an array of fds.
> 
> IIRC we mentioned that there is nothing in the VSM spec preventing
> higher VTLs from being enabled on only a subset of vCPUs. That said,
> even the TLFS mentions that doing so is not such a great idea (15.4
> VTL Enablement):
> 
> "Enable the target VTL on one or more virtual processors. [...] It is
>  recommended that all VPs have the same enabled VTLs. Having a VTL
>  enabled on some VPs (but not all) can lead to unexpected behavior."
> 
> One thing I've been meaning to research is moving device emulation into
> guest execution context by using VTLs. In that context, it might make
> sense to only enable VTLs on specific vCPUs. But I'm only speculating.

Creating vCPUs for a VTL in KVM doesn't need to _enable_ that VTL, and AIUI
shouldn't enable the VTL: HvCallEnablePartitionVtl "only" enables the VTL
for the VM as a whole, whereas HvCallEnableVpVtl is what fully enables the
VTL for a given vCPU.

What I am proposing is to create the KVM vCPU object(s) at KVM_CREATE_PLANE,
purely to help avoid NULL pointer dereferences.  Actually, since KVM will likely
need uAPI to let userspace enable a VTL for a vCPU even if the vCPU object is
auto-created, we could have KVM auto-create the objects transparently, i.e. still
provide KVM_CREATE_VCPU_PLANE, but under the hood it would simply enable a flag
and install the vCPU's file descriptor.
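
Roughly like so (completely untested sketch; kvm_get_vcpu() and
create_vcpu_fd() exist today, everything else, including the names, is
invented for illustration):

  static int kvm_vm_ioctl_create_vcpu_plane(struct kvm *plane_kvm,
                                            struct kvm_vcpu *plane0_vcpu)
  {
          struct kvm_vcpu *vcpu = kvm_get_vcpu(plane_kvm,
                                               plane0_vcpu->vcpu_idx);

          /* Object was pre-allocated at KVM_CREATE_PLANE; just flip
           * the (hypothetical) flag and hand userspace an fd. */
          vcpu->plane_enabled = true;
          return create_vcpu_fd(vcpu);
  }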

> Otherwise, I cannot think of real-world scenarios where this property is
> needed.
> 
> > E.g. is the expectation that userspace will create all planes before creating
> > any vCPUs?
> 
> The opposite, really: VTLs can be enabled at any time while the VM runs.

Oh, right.

> > My concern with relying on userspace to create vCPUs is that it will mean KVM
> > will need to support, or at least not blow up on, VMs with multiple planes, but
> > only a subset of vCPUs at planes > 0.  Given the snafus with vcpu_array, it's
> > not at all hard to imagine scenarios where KVM tries to access a NULL vCPU in
> > a different plane.
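
(For completeness, the hazard pattern, with illustrative names: any
lookup of a userspace-created plane vCPU can come back NULL, and that
would need guarding at every call site.)

  struct kvm_vcpu *plane_vcpu = kvm_get_vcpu(plane_kvm, vcpu_idx);

  if (!plane_vcpu)
          return -EINVAL;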