Re: [PATCH v4] x86/speculation, KVM: remove IBPB on vCPU load


 




> On May 12, 2022, at 11:50 PM, Jim Mattson <jmattson@xxxxxxxxxx> wrote:
> 
> On Thu, May 12, 2022 at 8:19 PM Jon Kohler <jon@xxxxxxxxxxx> wrote:
>> 
>> 
>> 
>>> On May 12, 2022, at 11:06 PM, Jim Mattson <jmattson@xxxxxxxxxx> wrote:
>>> 
>>> On Thu, May 12, 2022 at 5:50 PM Jon Kohler <jon@xxxxxxxxxxx> wrote:
>>> 
>>>> You mentioned the case where someone is concerned about performance.
>>>> Are you saying they care so critically about performance that they are
>>>> willing to *not* use IBPB at all, and instead just use taskset, hope
>>>> nothing else ever gets scheduled there, and then hope that the
>>>> hypervisor does the job for them?
>>> 
>>> I am saying that IBPB is not the only viable mitigation for
>>> cross-process indirect branch steering. Proper scheduling can also
>>> solve the problem, without the overhead of IBPB. Say that you have two
>>> security domains: trusted and untrusted. If you have a two-socket
>>> system, and you always run trusted workloads on socket#0 and untrusted
>>> workloads on socket#1, IBPB is completely superfluous. However, if the
>>> hypervisor chooses to schedule a vCPU thread from virtual socket#0
>>> after a vCPU thread from virtual socket#1 on the same logical
>>> processor, then it *must* execute an IBPB between those two vCPU
>>> threads. Otherwise, it has introduced a non-architectural
>>> vulnerability that the guest can't possibly be aware of.
>>> 
>>> If you can't trust your OS to schedule tasks where you tell it to
>>> schedule them, can you really trust it to provide you with any kind of
>>> inter-process security?
>> 
>> Fair enough, so going forward:
>> Should this be mandatory in all cases? The way this whole effort came
>> about was that a user could configure their KVM host with conditional
>> IBPB, but this particular mitigation is now always on no matter what.
>> 
>> In our previous patch review threads, Sean and I mostly settled on making
>> this particular avenue active only when a user configures always_ibpb, such
>> that cases like the one you describe (and others like it that come up in
>> the future) can be covered easily, but for cond_ibpb, we can document
>> that it doesn’t cover this case.
>> 
>> Would that be acceptable here?
> 
> That would make me unhappy. We use cond_ibpb, and I don't want to
> switch to always_ibpb, yet I do want this barrier.

Ok, gotcha. I think that is a good point for cloud providers, since the
workloads are especially opaque.
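
Just to make sure I'm reading you right: the barrier you're describing is
conditional on what last ran on the logical processor, i.e. something
roughly like the sketch below. This is only to check my understanding --
the per-CPU tracking and the function name are made up here, it is not
the actual KVM code.

#include <linux/kvm_host.h>
#include <linux/percpu.h>
#include <asm/nospec-branch.h>

/* Sketch only: "last_vcpu" is an illustrative per-CPU tracker. */
static DEFINE_PER_CPU(struct kvm_vcpu *, last_vcpu);

static void maybe_ibpb_on_vcpu_load(struct kvm_vcpu *vcpu)
{
        /*
         * If a different vCPU ran on this logical processor last, its
         * indirect branch history could steer this vCPU's speculation,
         * so flush the predictors before running the new one.
         */
        if (this_cpu_read(last_vcpu) != vcpu) {
                indirect_branch_prediction_barrier();
                this_cpu_write(last_vcpu, vcpu);
        }
}

Whether the trigger is "different vCPU", "different VM", or "different
virtual socket" is exactly the policy question, but the shape would be
the same either way.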

How about this: I could work up a v5 patch where this is, at minimum,
a system-level knob (similar to other mitigation knobs) and is documented
in more detail. That way, folks who want more control here have the basic
ability to exercise it without recompiling the kernel. Such a “knob” would
be on by default, so there is no functional regression here.
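
Concretely, I was picturing something like a module parameter layered on
top of the sketch above. The name and permissions here are illustrative
only, not necessarily what a v5 would end up using:

#include <linux/module.h>

/* Sketch: default-on knob, so nothing changes out of the box. */
static bool __read_mostly vcpu_load_ibpb = true;
module_param(vcpu_load_ibpb, bool, 0444);

static void maybe_ibpb_on_vcpu_load(struct kvm_vcpu *vcpu)
{
        if (!vcpu_load_ibpb)
                return;

        if (this_cpu_read(last_vcpu) != vcpu) {
                indirect_branch_prediction_barrier();
                this_cpu_write(last_vcpu, vcpu);
        }
}

Folks who deliberately handle isolation through pinning/scheduling could
turn it off, and everyone else keeps the barrier by default.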

Would that be ok with you as a middle ground?

Thanks again, 
Jon

> 
>>> 
>>>> Would this be the expectation of just KVM? Or all hypervisors on the
>>>> market?
>>> 
>>> Any hypervisor that doesn't do this is broken, but that won't keep it
>>> off the market. :-)
>> 
>> Very true :)
>> 




