On Mon, Jan 11, 2021 at 09:54:17AM -0800, Sean Christopherson wrote:
> Yes, but KVM including the bug caps in kvm_cpu_caps is extremely unlikely, and
> arguably flat out wrong. Currently, kvm_cpu_caps includes only CPUID-based
> features that can be exposed directly to the guest. I could see a scenario
> where KVM exposed "bug" capabilities to the guest via a paravirt interface,
> but I would expect that KVM would either filter and expose the kernel's bug
> caps without userspace input, or would add a KVM-defined paravirt CPUID leaf
> to enumerate the caps and track _that_ in kvm_cpu_caps.
>
> Anyways, I agree that overlapping the bug caps is a bit of unnecessary
> cleverness. I'm not opposed to incorporating NBUGINTS into KVM, but that
> would mean explicitly pulling in even more x86_capability implementation
> details.

Also, the kernel (and KVM, being part of it :)) kinda tries to fix those bugs
and not expose them to the guest, so exposing a bug would probably be only for
testing purposes...

> That part is deliberate and isn't a dependency so much as how things are
> implemented. The true dependency is on the bit offsets within each word.

Right.

> The kernel could completely rescramble the word numbering and KVM would chug
> along happily. What KVM won't play nice with is if the kernel broke up a
> hardware-defined, gathered CPUID leaf/word into scattered features spread out
> amongst multiple Linux-defined words.

Yes, KVM wants the bits just as they are in the CPUID leafs from the hw.

> It's mostly historical; before the kvm_cpu_caps concept was introduced, the
> code had grown organically to include both boot_cpu_data and raw CPUID info.
> The vast, vast majority of the time, doing CPUID is likely redundant. But, as
> noted in commit d8577a4c238f ("KVM: x86: Do host CPUID at load time to mask
> KVM cpu caps"), the code is quite cheap and runs once at KVM load.
> My argument back then was, and still is, that an extra bit of paranoia is
> justified since the code and operations are quite nearly free.

Ok.

> This particular dependency can be broken, and quite easily at that. Rather
> than memcpy() boot_cpu_data.x86_capability, it's trivially easy to redefine
> the F() macro to invoke boot_cpu_has(), which would allow dropping the
> memcpy(). The big downside, and why I didn't post the code, is that doing so
> means every feature routed through F() requires some form of BT+Jcc (or
> CMOVcc) sequence, whereas the memcpy() approach allows the F() features to be
> encoded as a single literal by the compiler.
>
> From a latency perspective, the extra code is negligible. The big issue is
> that all those extra checks add 2k+ bytes of code. Eliminating the memcpy()
> doesn't actually break KVM's dependency on the bit offsets, so we'd be
> bloating kvm.ko by a noticeable amount without providing substantial value.
>
> And, this behavior is mostly opportunistic; the true justification/motivation
> for taking a dependency on the X86_FEATURE_* bit offsets is for communication
> with userspace, querying the guest CPU model, and runtime checks.

Ok, I guess we'll try to find a middle ground here and not let stuff grow too
ugly to live.

> It's effectively for communication with userspace. Userspace, via ioctl(),
> dictates the vCPU model to KVM, including the exact CPUID results.

And using the CPUID leafs with the exact bit positions is sort of an
"interface" there, I see.

> to properly virtualize/emulate the defined vCPU model, KVM must query the
> dictated CPUID results to determine what features are supported, what guest
> operations should fault, etc... E.g. if the vCPU model, via CPUID, states
> that SMEP isn't supported then KVM needs to inject a #GP if the guest
> attempts to set CR4.SMEP.
>
> KVM also uses the hardware-defined CPUID ABI to advertise which features are
> supported by both hardware and KVM.
> This is the kvm_cpu_cap stuff, where KVM reads boot_cpu_data to see what
> features were enabled by the kernel.

Right.

> It would be possible for KVM to break the dependency on X86_FEATURE_* bit
> offsets by defining a translation layer, but I strongly feel that adding
> manual translations will do more harm than good as it increases the odds of
> us botching a translation or using the wrong feature flag, creates potential
> namespace conflicts, etc...

Ok, lemme see if we might encounter more issues down the road...

+enum kvm_only_cpuid_leafs {
+	CPUID_12_EAX	= NCAPINTS,
+	NR_KVM_CPU_CAPS,
+
+	NKVMCAPINTS	= NR_KVM_CPU_CAPS - NCAPINTS,
+};
+

What happens when we decide to allocate a separate leaf for CPUID_12_EAX down
the road?

You do it already here:

  Subject: [PATCH 04/13] x86/cpufeatures: Assign dedicated feature word for AMD mem encryption

for the AMD leaf. I'm thinking this way around, from scattered to a hw one,
should be ok because that should work easily. The other way around, taking a
hw leaf and scattering it around x86_capability[] array elems, would probably
be nasty, but with your change that should work too.

Yah, I'm just hypothesizing here; I don't think this "other way around" will
ever happen...

Hmm, yap, I can cautiously say that with your change we should be ok...

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette