2019-08-27 08:27+0200, Vitaly Kuznetsov:
> Tony Luck <tony.luck@xxxxxxxxx> writes:
>
> > When I boot my server I'm treated to a console log with:
> >
> > [   40.520510] kvm: disabled by bios
> > [   40.551234] kvm: disabled by bios
> > [   40.607987] kvm: disabled by bios
> > [   40.659701] kvm: disabled by bios
> > [   40.691224] kvm: disabled by bios
> > [   40.718786] kvm: disabled by bios
> > [   40.750122] kvm: disabled by bios
> > [   40.797170] kvm: disabled by bios
> > [   40.828408] kvm: disabled by bios
> >
> > ... many, many more lines, one for every logical CPU
>
> (If I didn't miss anything) we have the following code:
>
>   __init vmx_init()
>     kvm_init();
>       kvm_arch_init()
>
> and we bail on first error so there should be only 1 message per module
> load attempt. The question I have is who (and why) is trying to load
> kvm-intel (or kvm-amd which is not any different) for each CPU? Is it
> udev? Can this be changed?

I agree that this is highly suspicious behavior. It would be really
helpful to find out what is causing it. So far, this patch seems to be
working around a userspace bug.

> In particular, I'm worried about eVMCS enablement in vmx_init(), we will
> also get a bunch of "KVM: vmx: using Hyper-V Enlightened VMCS" messages
> if the consequent kvm_arch_init() fails.

And we can't get rid of this through the printk_once trick, because this
code lives in the kvm_intel module and therefore gets unloaded on every
failure.

I am also not inclined to apply the patch, because we will likely merge
the kvm and kvm_{svm,intel} modules in the future to take full advantage
of link-time optimizations, and this patch would stop working after
that.

Thanks.