Re: [PATCH] KVM: x86: Add memory barrier on vmcs field lookup

The attack scenario that I had in mind is a variant 1 side channel
attack (CVE-2017-5753,
https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html).
This particular scenario would involve an L1 hypervisor using
vmread/vmwrite to try to execute a variant 1 side channel leak on the
host.

In general, variant 1 relies on a bounds check that gets bypassed
speculatively.  However, it requires a fairly specific code pattern to
actually be useful for an exploit, which is why most bounds checks do
not require a speculation barrier.  It requires two memory references
close to each other: one that is out of bounds and attacker
controlled, and one whose memory address is based on the memory
read in the first access.  The first memory reference is a read of the
memory that the attacker wants to leak, and the second reference
creates a side channel in the cache, where the line accessed represents
the data to be leaked.
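
As an illustration, a generic, hypothetical gadget with that shape
(array1, array2, and victim() are made-up names, not KVM code) looks
roughly like this:

	unsigned char array1[16];          /* in-bounds data the bounds check protects   */
	unsigned char array2[256 * 4096];  /* probe array; the line touched encodes data */

	unsigned char victim(unsigned long index)  /* 'index' is attacker controlled */
	{
		if (index < 16)                    /* bounds check the CPU may speculate past */
			return array2[array1[index] * 4096];
			/* 1st access: array1[index] reads out of bounds;
			 * 2nd access: the value just read selects which
			 * array2 cache line gets loaded, which the attacker
			 * can later probe to recover that value. */
		return 0;
	}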

This code has that pattern because a potentially very large value for
field could be used as the index in the vmcs_field_to_offset_table
lookup, the result of which is put into f.  Then, very shortly
thereafter and potentially still within the speculation window, f is
dereferenced in the to_vmcs_field_info function.
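
To make the barrier placement concrete, here is a simplified sketch of
that lookup (names follow the patch quoted below; this is not the exact
kernel source):

	if (field >= ARRAY_SIZE(vmcs_field_to_offset_table))
		return NULL;                    /* architectural bounds check */

	/*
	 * Without a barrier, the CPU can speculate past the check above
	 * and issue the loads below with an out-of-bounds, guest
	 * controlled 'field'.  The lfence keeps those loads from
	 * executing until the branch has actually resolved.
	 */
	asm("lfence");

	f = &vmcs_field_to_offset_table[field]; /* index is guest controlled        */
	if (f->offset == 0)                     /* load through f; the offset read  */
		return NULL;                    /* here feeds later, data-dependent */
	return f;                               /* accesses in the caller           */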



On Fri, Jan 12, 2018 at 10:50 AM, Liran Alon <liran.alon@xxxxxxxxxx> wrote:
>
> ----- jmattson@xxxxxxxxxx wrote:
>
>> From: Andrew Honig <ahonig@xxxxxxxxxx>
>>
>> This adds a memory barrier when performing a lookup into
>> the vmcs_field_to_offset_table.  This is related to
>> CVE-2017-5753.
>>
>> Signed-off-by: Andrew Honig <ahonig@xxxxxxxxxx>
>> Reviewed-by: Jim Mattson <jmattson@xxxxxxxxxx>
>> ---
>>  arch/x86/kvm/vmx.c | 1 +
>>  1 file changed, 1 insertion(+)
>>
>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>> index cef1882e5d57..f304ebb20229 100644
>> --- a/arch/x86/kvm/vmx.c
>> +++ b/arch/x86/kvm/vmx.c
>> @@ -1084,6 +1084,7 @@ static inline struct vmcs_field_info *to_vmcs_field_info(unsigned long field)
>>       if (field >= ARRAY_SIZE(vmcs_field_to_offset_table))
>>               return NULL;
>>
>> +     asm("lfence"); // Change to gmb() when it's available.
>>       f = &vmcs_field_to_offset_table[field];
>>       if (f->offset == 0)
>>               return NULL;
>> --
>> 2.16.0.rc1.238.g530d649a79-goog
>
> Can you elaborate on why this specific place was chosen to have a "speculation barrier"?
> What is the attack scenario that you had in mind?
>
> Is it because when L1 executes VMREAD/VMWRITE with a general-purpose register,
> the field will be read by handle_vmread()/handle_vmwrite() and used as an index into vmcs_field_to_offset_table[], which could speculatively be accessed out-of-bounds?
>
> Regards,
> -Liran


