On arch-lbr capable platforms, cpuid(0x1c, 0) returns meaningful
arch-lbr values; in particular, eax[7:0] holds the supported LBR depth
mask. On legacy (non-arch-lbr) platforms, cpuid(0x1c, 0) returns with
eax/ebx/ecx/edx zeroed out.

On legacy platforms, the selftests app at startup first queries the
supported CPUIDs via KVM_GET_SUPPORTED_CPUID and then writes the
returned data back with KVM_SET_CPUID2. This hands an empty CPUID
leaf (0x1c, 0) to KVM, which makes the depth check fail, and the
selftest exits with the error message:

  KVM_SET_CPUID2 failed, rc: -1 errno: 22

So check that leaf (0x1c, 0) is non-empty before validating the LBR
depth value. QEMU filters out empty CPUID leaves before calling
KVM_SET_CPUID2, so it is not affected by this problem.

Fixes: 4b73207592 ("KVM: x86/cpuid: Advertise Arch LBR feature in CPUID")
Signed-off-by: Yang Weijiang <weijiang.yang@xxxxxxxxx>
---
 arch/x86/kvm/cpuid.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 9c107b5cc88f..c2eab1a73aab 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -103,13 +103,14 @@ static int kvm_check_cpuid(struct kvm_vcpu *vcpu,
 		return -EINVAL;
 	}
 
 	best = cpuid_entry2_find(entries, nent, 0x1c, 0);
-	if (best) {
+	if (best && best->eax) {
 		unsigned int eax, ebx, ecx, edx;
 
 		/* Reject user-space CPUID if depth is different from host's.*/
 		cpuid_count(0x1c, 0, &eax, &ebx, &ecx, &edx);
 
-		if ((best->eax & 0xff) != BIT(fls(eax & 0xff) - 1))
+		if ((eax & 0xff) &&
+		    (best->eax & 0xff) != BIT(fls(eax & 0xff) - 1))
 			return -EINVAL;
 	}
-- 
2.27.0