On Sat, Apr 30, 2022 at 02:50:35PM +0000, Jon Kohler wrote:
> This is 100% a fair ask, I appreciate the diligence, as we’ve all been there
> on the ‘other side’ of changes to complex areas and spent hours digging
> through git history, LKML threads, SDM/APM, and other sources trying to
> derive why the heck something is the way it is.

Yap, that's basically proving my point and why I want stuff to be
properly documented so that the question "why was it done this way" can
always be answered satisfactorily.

> AFAIK, the KVM IBPB is avoided when switching in between vCPUs
> belonging to the same vmcs/vmcb (i.e. the same guest), e.g. you could
> have one VM highly oversubscribed to the host and you wouldn’t see
> either the KVM IBPB or the switch_mm IBPB. All good.
>
> Reference vmx_vcpu_load_vmcs() and svm_vcpu_load() and the
> conditionals prior to the barrier.

So this is where something's still missing.

> However, the pain ramps up when you have a bunch of separate guests,
> especially with a small amount of vCPUs per guest, so the switching is
> more likely to be in between completely separate guests.

If the guests are completely separate, then it should fall into the
switch_mm() case.

Unless it has something to do with, as I looked at the SVM side of
things, the VMCBs:

	if (sd->current_vmcb != svm->vmcb) {

So it is not only different guests: the barrier is also issued within
the same guest, whenever the VMCB of the incoming vCPU is not the
current one.

But then, if the VMCB of the vCPU is not the current, per-CPU VMCB, that
CPU ran another guest, so in order to keep that other guest from
attacking the current guest, the branch predictor state should be
flushed.

But I'm likely missing a virt aspect here, so I'd let Sean explain what
the rules are...

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette