On 13/05/2020, Roman Kagan wrote:
On Fri, Apr 24, 2020 at 02:37:41PM +0300, Jon Doron wrote:
Simplify the code so that a new cpuid leaf group can be defined per
enabled feature. This also fixes a bug in which the max cpuid leaf was
always set to HYPERV_CPUID_NESTED_FEATURES regardless of whether
nesting is supported.
I'm not sure the bug is there. My understanding is that
HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS is supposed to provide the range
of leaves that return meaningful information.
HYPERV_CPUID_NESTED_FEATURES *can* return meaningful information
regardless of whether nested virt is active.
So I'd rather skip reducing the returned set if !evmcs_ver. The
returned set is sparse in .function anyway; anything that isn't there
will just return zeros to the guest.
Changing the cpuid is also guest-visible, so care must be taken with
this.
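To illustrate the semantics I mean, here's a guest-side sketch
(userspace code using GCC's <cpuid.h>, not kernel code; the 0x40000000
constant is the standard hypervisor vendor leaf):

#include <stdint.h>
#include <cpuid.h>	/* GCC/Clang __cpuid() macro */

/* Local define for the sketch; the kernel spells this
 * HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS in hyperv-tlfs.h. */
#define HV_CPUID_VENDOR_AND_MAX_FUNCTIONS 0x40000000

/*
 * EAX of leaf 0x40000000 is the maximum input value for hypervisor
 * CPUID information.  A leaf above it can still be queried safely;
 * it just reads back as all zeros.
 */
static int hv_leaf_advertised(uint32_t leaf)
{
	uint32_t eax, ebx, ecx, edx;

	__cpuid(HV_CPUID_VENDOR_AND_MAX_FUNCTIONS, eax, ebx, ecx, edx);
	return leaf <= eax;
}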
To be honest, from my understanding the TLFS states:
"The maximum input value for hypervisor CPUID information."
So we should not expose leaves we won't "answer" to. I agree you can
always issue CPUID against any leaf and get zeroes back, but if we try
to follow the TLFS it sounds like the value needs to be capped here.
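For concreteness, with the leaf values from hyperv-tlfs.h the cap
works out as follows (my reading of the patch below, not text from the
TLFS):

/*
 * Leaf 0x40000000 (HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS), EAX:
 *
 *   eVMCS unsupported: HYPERV_CPUID_IMPLEMENT_LIMITS (0x40000005)
 *   eVMCS supported:   HYPERV_CPUID_NESTED_FEATURES  (0x4000000A)
 */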
Any new CPUID group needs to consider the max leaf and be added in the
correct order. With this method there are two rules (see the sketch
below):
1. Each cpuid leaf group must be sorted in ascending order.
2. Appending the cpuid leaf groups by feature must also happen in
ascending order.
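As a hypothetical illustration of both rules (the SYNDBG names are
only an example of a possible future group, and the syndbg_enabled
gate is made up here):

/* Rule 1: the group itself is sorted ascending by function. */
static struct kvm_cpuid_entry2 syndbg_cpuid_entries[] = {
	{ .function = HYPERV_CPUID_SYNDBG_VENDOR_AND_MAX_FUNCTIONS },
	{ .function = HYPERV_CPUID_SYNDBG_INTERFACE },
	{ .function = HYPERV_CPUID_SYNDBG_PLATFORM_CAPABILITIES },
};

	/*
	 * Rule 2 (inside kvm_vcpu_ioctl_get_hv_cpuid): these functions
	 * (0x40000080..) are above HYPERV_CPUID_NESTED_FEATURES
	 * (0x4000000A), so the group is appended only after the evmcs
	 * group.
	 */
	if (syndbg_enabled) {	/* hypothetical gate */
		memcpy(&cpuid_entries[nent], &syndbg_cpuid_entries,
		       sizeof(syndbg_cpuid_entries));
		nent += ARRAY_SIZE(syndbg_cpuid_entries);
	}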
It looks like an unnecessary complication. I think all you need to do
to simplify adding new leaves is to add a macro to hyperv-tlfs.h, say,
HYPERV_CPUID_MAX_PRESENT_LEAF, define it to
HYPERV_CPUID_NESTED_FEATURES, and redefine it once another leaf is
added (compat may need to be taken care of).
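Something like this (just a sketch; compat handling omitted):

/* In hyperv-tlfs.h: bumped whenever a new top leaf is introduced. */
#define HYPERV_CPUID_MAX_PRESENT_LEAF	HYPERV_CPUID_NESTED_FEATURES

/* And kvm_vcpu_ioctl_get_hv_cpuid() would report it unconditionally: */
ent->eax = HYPERV_CPUID_MAX_PRESENT_LEAF;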
Thanks,
Roman.
I suggest you look at the discussion around v8 of this patchset, where
I simply set HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS to be the maximum
value; we then noticed this issue, which is why the patch was revised
to its current form. (I agree it could be done in the TLFS header
file, but as I understand from other emails from Michal it's going to
get reworked a bit and split; I still haven't got into the details of
that work.)
Thanks,
-- Jon.
Reviewed-by: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
Signed-off-by: Jon Doron <arilou@xxxxxxxxx>
---
arch/x86/kvm/hyperv.c | 46 ++++++++++++++++++++++++++++++-------------
1 file changed, 32 insertions(+), 14 deletions(-)
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index bcefa9d4e57e..ab3e9dbcabbe 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -1785,27 +1785,45 @@ int kvm_vm_ioctl_hv_eventfd(struct kvm *kvm, struct kvm_hyperv_eventfd *args)
return kvm_hv_eventfd_assign(kvm, args->conn_id, args->fd);
}
+/* Must be sorted in ascending order by function */
+static struct kvm_cpuid_entry2 core_cpuid_entries[] = {
+ { .function = HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS },
+ { .function = HYPERV_CPUID_INTERFACE },
+ { .function = HYPERV_CPUID_VERSION },
+ { .function = HYPERV_CPUID_FEATURES },
+ { .function = HYPERV_CPUID_ENLIGHTMENT_INFO },
+ { .function = HYPERV_CPUID_IMPLEMENT_LIMITS },
+};
+
+static struct kvm_cpuid_entry2 evmcs_cpuid_entries[] = {
+ { .function = HYPERV_CPUID_NESTED_FEATURES },
+};
+
+#define HV_MAX_CPUID_ENTRIES \
+ (ARRAY_SIZE(core_cpuid_entries) +\
+ ARRAY_SIZE(evmcs_cpuid_entries))
+
int kvm_vcpu_ioctl_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
struct kvm_cpuid_entry2 __user *entries)
{
uint16_t evmcs_ver = 0;
- struct kvm_cpuid_entry2 cpuid_entries[] = {
- { .function = HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS },
- { .function = HYPERV_CPUID_INTERFACE },
- { .function = HYPERV_CPUID_VERSION },
- { .function = HYPERV_CPUID_FEATURES },
- { .function = HYPERV_CPUID_ENLIGHTMENT_INFO },
- { .function = HYPERV_CPUID_IMPLEMENT_LIMITS },
- { .function = HYPERV_CPUID_NESTED_FEATURES },
- };
- int i, nent = ARRAY_SIZE(cpuid_entries);
+ struct kvm_cpuid_entry2 cpuid_entries[HV_MAX_CPUID_ENTRIES];
+ int i, nent = 0;
+
+ /* Set the core cpuid entries required for Hyper-V */
+ memcpy(&cpuid_entries[nent], &core_cpuid_entries,
+ sizeof(core_cpuid_entries));
+ nent = ARRAY_SIZE(core_cpuid_entries);
if (kvm_x86_ops.nested_get_evmcs_version)
evmcs_ver = kvm_x86_ops.nested_get_evmcs_version(vcpu);
- /* Skip NESTED_FEATURES if eVMCS is not supported */
- if (!evmcs_ver)
- --nent;
+ if (evmcs_ver) {
+ /* eVMCS is enabled, add the required eVMCS CPUID leaves */
+ memcpy(&cpuid_entries[nent], &evmcs_cpuid_entries,
+ sizeof(evmcs_cpuid_entries));
+ nent += ARRAY_SIZE(evmcs_cpuid_entries);
+ }
if (cpuid->nent < nent)
return -E2BIG;
@@ -1821,7 +1839,7 @@ int kvm_vcpu_ioctl_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
case HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS:
memcpy(signature, "Linux KVM Hv", 12);
- ent->eax = HYPERV_CPUID_NESTED_FEATURES;
+ ent->eax = cpuid_entries[nent - 1].function;
ent->ebx = signature[0];
ent->ecx = signature[1];
ent->edx = signature[2];
--
2.24.1