On 3/30/2023 1:34 AM, Sean Christopherson wrote:
On Wed, Mar 29, 2023, Binbin Wu wrote:
On 3/29/2023 10:04 AM, Huang, Kai wrote:
On Wed, 2023-03-29 at 09:27 +0800, Binbin Wu wrote:
On 3/29/2023 7:33 AM, Huang, Kai wrote:
On Tue, 2023-03-21 at 14:35 -0700, Sean Christopherson wrote:
On Mon, Mar 20, 2023, Chao Gao wrote:
On Sun, Mar 19, 2023 at 04:49:22PM +0800, Binbin Wu wrote:
get_vmx_mem_address() and sgx_get_encls_gva() use is_long_mode()
to check 64-bit mode. They should use is_64_bit_mode() instead.
Fixes: f9eb4af67c9d ("KVM: nVMX: VMX instructions: add checks for #GP/#SS exceptions")
Fixes: 70210c044b4e ("KVM: VMX: Add SGX ENCLS[ECREATE] handler to enforce CPUID restrictions")
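For reference, the difference between the two helpers is roughly the following
(paraphrased from arch/x86/kvm/x86.h, details vary by kernel version):

static inline int is_long_mode(struct kvm_vcpu *vcpu)
{
#ifdef CONFIG_X86_64
	/* Long mode is active (EFER.LMA=1), but CS.L may still be 0. */
	return vcpu->arch.efer & EFER_LMA;
#else
	return 0;
#endif
}

static inline bool is_64_bit_mode(struct kvm_vcpu *vcpu)
{
	int cs_db, cs_l;

	if (!is_long_mode(vcpu))
		return false;

	/* 64-bit mode additionally requires CS.L=1, i.e. not compatibility mode. */
	static_call(kvm_x86_get_cs_db_l_bits)(vcpu, &cs_db, &cs_l);
	return cs_l;
}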
It is better to split this patch into two: one for nested and one for
SGX.
It is possible that a kernel release has just one of the above two
flawed commits, in which case this fix cannot be applied cleanly to
that release.
The nVMX code isn't buggy: VMX instructions #UD in compatibility mode, and except
for VMCALL, that #UD has higher priority than VM-Exit interception. So I'd say
just drop the nVMX side of things.
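The SGX side would then be the only functional change, e.g. something like the
below (untested sketch, context lines paraphrased from arch/x86/kvm/vmx/sgx.c
and may differ by kernel version):

diff --git a/arch/x86/kvm/vmx/sgx.c b/arch/x86/kvm/vmx/sgx.c
--- a/arch/x86/kvm/vmx/sgx.c
+++ b/arch/x86/kvm/vmx/sgx.c
@@ sgx_get_encls_gva() @@
 	/* Skip vmcs.GUEST_DS retrieval for 64-bit mode to avoid VMREADs. */
 	*gva = offset;
-	if (!is_long_mode(vcpu)) {
+	if (!is_64_bit_mode(vcpu)) {
 		vmx_get_segment(vcpu, &s, VCPU_SREG_DS);
 		*gva += s.base;
 	}

 	if (!IS_ALIGNED(*gva, alignment)) {
 		fault = true;
-	} else if (likely(is_long_mode(vcpu))) {
+	} else if (likely(is_64_bit_mode(vcpu))) {
 		fault = is_noncanonical_address(*gva, vcpu);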
But it looks like the old code doesn't unconditionally inject #UD when in
compatibility mode?
I think Sean means that VMX instructions are not valid in compatibility mode
and trigger #UD, which has higher priority than VM-Exit, on the processor in
non-root mode.
So if there is a VM-Exit due to a VMX instruction, the guest is in 64-bit mode
for sure if it is in long mode.
Oh I see, thanks.
Then is it better to add a comment to explain this, or to add a WARN() if it's
not in 64-bit mode?
I also prefer to add a comment if there is no objection.
Seems I am not the only one who didn't get it : )
I would rather have a code change than a comment, e.g.
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index f63b28f46a71..0460ca219f96 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4931,7 +4931,8 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
int base_reg = (vmx_instruction_info >> 23) & 0xf;
bool base_is_valid = !(vmx_instruction_info & (1u << 27));
- if (is_reg) {
+ if (is_reg ||
+ WARN_ON_ONCE(is_long_mode(vcpu) && !is_64_bit_mode(vcpu))) {
kvm_queue_exception(vcpu, UD_VECTOR);
return 1;
}
The only downside is that querying is_64_bit_mode() could unnecessarily trigger a
VMREAD to get the current CS.L bit, but a measurable performance regression is
extremely unlikely because is_64_bit_mode() is all but guaranteed to be called in
these paths anyway (and KVM caches segment info), e.g. by kvm_register_read().
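To illustrate, kvm_register_read() already does roughly the following
(paraphrased from arch/x86/kvm/x86.h), so the mode query is on these paths
regardless:

static inline unsigned long kvm_register_read(struct kvm_vcpu *vcpu, int reg)
{
	unsigned long val = kvm_register_read_raw(vcpu, reg);

	/* Reads are truncated to 32 bits outside of 64-bit mode. */
	return is_64_bit_mode(vcpu) ? val : (u32)val;
}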
And then in a follow-up, we should also be able to do:
@@ -5402,7 +5403,7 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
if (instr_info & BIT(10)) {
kvm_register_write(vcpu, (((instr_info) >> 3) & 0xf), value);
} else {
- len = is_64_bit_mode(vcpu) ? 8 : 4;
+ len = is_long_mode(vcpu) ? 8 : 4;
if (get_vmx_mem_address(vcpu, exit_qualification,
instr_info, true, len, &gva))
return 1;
@@ -5476,7 +5477,7 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
if (instr_info & BIT(10))
value = kvm_register_read(vcpu, (((instr_info) >> 3) & 0xf));
else {
- len = is_64_bit_mode(vcpu) ? 8 : 4;
+ len = is_long_mode(vcpu) ? 8 : 4;
if (get_vmx_mem_address(vcpu, exit_qualification,
instr_info, false, len, &gva))
return 1;
Agree to replace is_64_bit_mode() with is_long_mode().
But, based on the implementation and the comment of
nested_vmx_check_permission(),
do you think it is still necessary to add the check for compatibility mode?
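For reference, nested_vmx_check_permission() is roughly the following
(paraphrased from arch/x86/kvm/vmx/nested.c; the comment block referenced
above is not reproduced here):

static int nested_vmx_check_permission(struct kvm_vcpu *vcpu)
{
	if (!to_vmx(vcpu)->nested.vmxon) {
		/* VMX instructions #UD if VMXON has not been executed. */
		kvm_queue_exception(vcpu, UD_VECTOR);
		return 0;
	}

	if (vmx_get_cpl(vcpu)) {
		/* All VMX instructions require CPL 0. */
		kvm_inject_gp(vcpu, 0);
		return 0;
	}

	return 1;
}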