On Tue, Dec 11, 2018 at 02:28:14PM +0100, Vitaly Kuznetsov wrote:
> Roman Kagan <rkagan@xxxxxxxxxxxxx> writes:
>
> > On Mon, Dec 10, 2018 at 06:21:56PM +0100, Vitaly Kuznetsov wrote:
> >
> >> +
> >> +Currently, the following list of CPUID leaves are returned:
> >> +  HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS
> >> +  HYPERV_CPUID_INTERFACE
> >> +  HYPERV_CPUID_VERSION
> >> +  HYPERV_CPUID_FEATURES
> >> +  HYPERV_CPUID_ENLIGHTMENT_INFO
> >> +  HYPERV_CPUID_IMPLEMENT_LIMITS
> >> +  HYPERV_CPUID_NESTED_FEATURES
> >> +
> >> +HYPERV_CPUID_NESTED_FEATURES leaf is only exposed when Enlightened VMCS was
> >> +enabled on the corresponding vCPU (KVM_CAP_HYPERV_ENLIGHTENED_VMCS).
> >
> > IOW the output of ioctl(KVM_GET_SUPPORTED_HV_CPUID) depends on
> > whether ioctl(KVM_ENABLE_CAP, KVM_CAP_HYPERV_ENLIGHTENED_VMCS) has
> > already been called on that vcpu?  I wonder if this fits the intended
> > usage?
>
> I added HYPERV_CPUID_NESTED_FEATURES in the list (and made the new ioctl
> per-cpu and not per-vm) for consistency. *In theory*
> KVM_CAP_HYPERV_ENLIGHTENED_VMCS is also enabled per-vcpu so some
> hypothetical userspace can later check enabled eVMCS versions (which can
> differ across vCPUs!) with KVM_GET_SUPPORTED_HV_CPUID. We will also have
> direct tlb flush and other nested features there, so to avoid adding new
> KVM_CAP_* for them we need the CPUID.

This is different from how KVM_GET_SUPPORTED_CPUID is used: QEMU assumes
that its output doesn't change between calls, and even caches the
result, calling the ioctl only once.

> Another thing I'm thinking about is something like an 'hv_all' cpu flag
> for Qemu which would enable everything by setting guest CPUIDs to what
> KVM_GET_SUPPORTED_HV_CPUID returns. In that case it would also be
> convenient to have HYPERV_CPUID_NESTED_FEATURES properly filled (or not
> filled when eVMCS was not enabled).

I think this is orthogonal to the way you obtain capability info from
the kernel.

Roman.