On Fri, Apr 29, 2022, Borislav Petkov wrote:
> On Fri, Apr 29, 2022 at 05:31:16PM +0000, Jon Kohler wrote:
> > Selftests IIUC, but there may be other VMMs that do funny stuff. Said
> > another way, I don't think we actively restrict user space from doing
> > this as far as I know.
>
> "selftests", "there may be"?!
>
> This doesn't sound like a real-life use case to me and we don't do
> changes just because. Sorry.
>
> > The paranoid aspect here is KVM is issuing an *additional* IBPB on
> > top of what already happens in switch_mm().
>
> Yeah, I know how that works.
>
> > IMHO KVM side IBPB for most use cases isn't really necessary, but
> > the general concept is that you want to protect a vCPU from guest A
> > against guest B, so you issue a prediction barrier on vCPU switch.
> >
> > *however* that protection already happens in switch_mm(), because
> > guest A and B are likely to use different mm_structs, so the only
> > point of having this support in KVM seems to be to "kill it with
> > fire" for paranoid users who might be doing some tomfoolery that
> > would somehow bypass switch_mm() protection (such as somehow
> > sharing an mm_struct).
>
> Yeah, no, this all sounds like something highly hypothetical or there's
> a use case of which you don't want to talk about publicly.

What Jon is trying to do is eliminate an IBPB that already exists in KVM.
The catch is that, in theory, someone not-Jon could be running multiple VMs
in a single address space, e.g. VM-based containers. So if we simply delete
the IBPB, then we could theoretically and silently break a user. That's why
there's a bunch of hand-waving.

> Either way, from what I'm reading I'm not in the least convinced that
> this is needed.

Can you clarify what "this" is? Does "this" mean "this patch", or does it
mean "this IBPB when switching vCPUs"? Because if it means the latter, then
I think you're in violent agreement; the IBPB when switching vCPUs is
pointless and unnecessary.
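
If it helps make that corner case concrete, below is a tiny userspace model
of the two barrier sites (purely illustrative; fake_ibpb(), switch_mm_model()
and vcpu_load_model() are made-up stand-ins, not the actual cond_mitigation()
or svm_vcpu_load()/vmx_vcpu_load_vmcs() code). In the common case where each
VM has its own mm, both "barriers" fire on a vCPU switch, i.e. the KVM-side
one is redundant; only when two VMs share an mm does the KVM-side barrier do
anything that the switch_mm() one doesn't:

/* Toy userspace model, not kernel code. */
#include <stdio.h>

struct mm   { int id; };
struct vcpu { int id; struct mm *mm; };

static const struct mm   *last_mm;
static const struct vcpu *last_vcpu;

static void fake_ibpb(const char *site)
{
	printf("IBPB from %s\n", site);
}

/* Stand-in for the switch_mm() path: barrier only when the mm changes. */
static void switch_mm_model(const struct mm *next)
{
	if (next != last_mm) {
		fake_ibpb("switch_mm");
		last_mm = next;
	}
}

/* Stand-in for the KVM vCPU-load path: barrier when the vCPU changes. */
static void vcpu_load_model(const struct vcpu *next)
{
	if (next != last_vcpu) {
		fake_ibpb("kvm vcpu load");
		last_vcpu = next;
	}
}

int main(void)
{
	struct mm mm_a = { 1 }, mm_b = { 2 };
	struct vcpu vm_a_vcpu = { 1, &mm_a };
	struct vcpu vm_b_vcpu = { 2, &mm_b };
	struct vcpu vm_c_vcpu = { 3, &mm_a };	/* second VM sharing mm_a */

	/* Typical case: different VMs use different mms, so the switch_mm()
	 * barrier already fires and the KVM-side one is redundant. */
	switch_mm_model(vm_a_vcpu.mm);
	vcpu_load_model(&vm_a_vcpu);
	switch_mm_model(vm_b_vcpu.mm);
	vcpu_load_model(&vm_b_vcpu);

	/* Corner case: two VMs sharing an mm; switch_mm() stays quiet and
	 * only the KVM-side barrier fires on the vCPU switch. */
	switch_mm_model(vm_a_vcpu.mm);
	vcpu_load_model(&vm_a_vcpu);
	switch_mm_model(vm_c_vcpu.mm);
	vcpu_load_model(&vm_c_vcpu);

	return 0;
}

Compiling and running it (e.g. gcc -std=c99 model.c) prints which site fires
in each scenario, which is the whole argument in miniature.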