Re: [PATCH v6 2/7] KVM: VMX: Use is_64_bit_mode() to check 64-bit mode

On 3/30/2023 6:46 AM, Huang, Kai wrote:
On Wed, 2023-03-29 at 10:34 -0700, Sean Christopherson wrote:
On Wed, Mar 29, 2023, Binbin Wu wrote:
On 3/29/2023 10:04 AM, Huang, Kai wrote:
On Wed, 2023-03-29 at 09:27 +0800, Binbin Wu wrote:
On 3/29/2023 7:33 AM, Huang, Kai wrote:
On Tue, 2023-03-21 at 14:35 -0700, Sean Christopherson wrote:
On Mon, Mar 20, 2023, Chao Gao wrote:
On Sun, Mar 19, 2023 at 04:49:22PM +0800, Binbin Wu wrote:
get_vmx_mem_address() and sgx_get_encls_gva() use is_long_mode()
to check for 64-bit mode. They should use is_64_bit_mode() instead.

Fixes: f9eb4af67c9d ("KVM: nVMX: VMX instructions: add checks for #GP/#SS exceptions")
Fixes: 70210c044b4e ("KVM: VMX: Add SGX ENCLS[ECREATE] handler to enforce CPUID restrictions")
It is better to split this patch into two: one for nested and one for
SGX.

It is possible that there is a kernel release which has just one of
the above two flawed commits; then this fix patch cannot be applied
cleanly to that release.
The nVMX code isn't buggy: VMX instructions #UD in compatibility mode, and except
for VMCALL, that #UD has higher priority than VM-Exit interception.  So I'd say
just drop the nVMX side of things.
But it looks like the old code doesn't unconditionally inject #UD when in
compatibility mode?
I think Sean means that VMX instructions are not valid in compatibility mode
and trigger #UD, which has higher priority than a VM-Exit, on the
processor in non-root mode.

So if there is a VM-Exit due to a VMX instruction, it is in 64-bit mode
for sure if it is in long mode.
Oh I see, thanks.

Then is it better to add some comment to explain, or add a WARN() if it's not in
64-bit mode?
I also prefer adding a comment, if there is no objection.

Seems I am not the only one who didn't get it :)
I would rather have a code change than a comment, e.g.

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index f63b28f46a71..0460ca219f96 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4931,7 +4931,8 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
        int  base_reg       = (vmx_instruction_info >> 23) & 0xf;
        bool base_is_valid  = !(vmx_instruction_info & (1u << 27));

-       if (is_reg) {
+       if (is_reg ||
+           WARN_ON_ONCE(is_long_mode(vcpu) && !is_64_bit_mode(vcpu))) {
                kvm_queue_exception(vcpu, UD_VECTOR);
                return 1;
        }


Looks good to me.

The only downside is that querying is_64_bit_mode() could unnecessarily trigger a
VMREAD to get the current CS.L bit, but a measurable performance regression is
extremely unlikely because is_64_bit_mode() is all but guaranteed to be called in
these paths anyways (and KVM caches segment info), e.g. by kvm_register_read().
Agreed.

And then in a follow-up, we should also be able to do:

@@ -5402,7 +5403,7 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
         if (instr_info & BIT(10)) {
                 kvm_register_write(vcpu, (((instr_info) >> 3) & 0xf), value);
         } else {
-               len = is_64_bit_mode(vcpu) ? 8 : 4;
+               len = is_long_mode(vcpu) ? 8 : 4;
                 if (get_vmx_mem_address(vcpu, exit_qualification,
                                         instr_info, true, len, &gva))
                         return 1;
@@ -5476,7 +5477,7 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
         if (instr_info & BIT(10))
                 value = kvm_register_read(vcpu, (((instr_info) >> 3) & 0xf));
         else {
-               len = is_64_bit_mode(vcpu) ? 8 : 4;
+               len = is_long_mode(vcpu) ? 8 : 4;
                 if (get_vmx_mem_address(vcpu, exit_qualification,
                                         instr_info, false, len, &gva))
                         return 1;

Yeah, although it's a little bit weird that the actual WARN() happens after the above
code change.  But I don't know how to make the code better.  Maybe we should put
the WARN() at the very beginning, but this would require duplicated code in each
handle_xxx() for the VMX instructions.

I checked the code again and found the comment of nested_vmx_check_permission():

"/*
 * Intel's VMX Instruction Reference specifies a common set of prerequisites
 * for running VMX instructions (except VMXON, whose prerequisites are
 * slightly different). It also specifies what exception to inject otherwise.
 * Note that many of these exceptions have priority over VM exits, so they
 * don't have to be checked again here.
 */"

I think the Note part in the comment has tried to call out why the check for compatibility mode is unnecessary.

But I have a question here: nested_vmx_check_permission() checks that the vcpu is post-VMXON, otherwise it injects a #UD. Why is this #UD handled in the VM-Exit handler specifically?
Do not all #UDs have higher priority than VM exits?

According to the SDM section "Relative Priority of Faults and VM Exits":
"Certain exceptions have priority over VM exits. These include invalid-opcode exceptions, ..."
There seems to be no further classification of #UDs.

Anyway, I will separate this patch from the LAM KVM enabling series, and send a patch separately
later if needed.



