On 2/11/2023 1:57 PM, Robert Hoo wrote:
> On Fri, 2023-02-10 at 23:04 +0800, Chao Gao wrote:
>>> @@ -1809,6 +1809,10 @@ static int __kvm_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data,
>>>  	case MSR_KERNEL_GS_BASE:
>>>  	case MSR_CSTAR:
>>>  	case MSR_LSTAR:
>>> +		/*
>>> +		 * The strict canonical checking still applies to MSR
>>> +		 * writes even when LAM is enabled.
>>> +		 */
>>>  		if (is_noncanonical_address(data, vcpu))
>> LAM spec says:
>>
>>   Processors that support LAM continue to require the addresses written
>>   to control registers or MSRs be 57-bit canonical if the processor
>>   supports 5-level paging or 48-bit canonical if it supports only
>>   4-level paging.
>>
>> My understanding is 57-bit canonical checking is performed if the
>> processor __supports__ 5-level paging. Then the is_noncanonical_address()
>> here is arguably wrong. Could you double-confirm and fix it?
> Emm, the condition being "support" vs. "enabled", you mean
> vcpu_virt_addr_bits(), right?
>
> {
> 	return kvm_read_cr4_bits(vcpu, X86_CR4_LA57) ? 57 : 48;
> }
>
> This is worth double-confirming. Let me check with Yu. Thanks.
I checked the definition of canonical address checking in the Intel SDM.
It says: “In 64-bit mode, an address is considered to be in canonical
form if address bits 63 through to the most-significant implemented bit
by the microarchitecture are set to either all ones or all zeros”.

So the bit width of the processor's canonical address check depends on
the "support" of 5-level paging, not on its "enable status". And this
is aligned with the LAM spec wording quoted above.
The current code for the canonical check is thus not strictly aligned
with the hardware behavior.
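
To illustrate the gap (the address value below is made up for the
example): something like 0x00ff800000000000 has bits 55:47 set and bits
63:56 clear, so it is 57-bit canonical but not 48-bit canonical. A quick
userspace check, mirroring the sign-extension trick that
__canonical_address() in arch/x86/include/asm/page.h uses:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Canonical iff sign-extending from bit (bits - 1) reproduces the value. */
static bool is_canonical(uint64_t va, unsigned int bits)
{
	uint64_t sext = (uint64_t)((int64_t)(va << (64 - bits)) >> (64 - bits));

	return sext == va;
}

int main(void)
{
	uint64_t va = 0x00ff800000000000ULL;	/* made-up example value */

	printf("48-bit canonical: %d\n", is_canonical(va, 48));	/* prints 0 */
	printf("57-bit canonical: %d\n", is_canonical(va, 57));	/* prints 1 */
	return 0;
}

If the check width really follows "support" as described above, hardware
would accept a write of such a value to e.g. MSR_LSTAR on a CPU that
supports 5-level paging but has CR4.LA57 clear, while the current check
in __kvm_set_msr() (48 bits from vcpu_virt_addr_bits()) would reject it.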
IMHO, the current code can work properly in practice because, even after
a processor supports 5-level paging, the virtual memory maps used by OSes
running in 4-level paging mode still keep bits 63:47 identical for
canonical addresses (which is reasonable and necessary for backward
compatibility). So for functionality, the current implementation should
be OK.
I'd like to hear more opinions on whether we need to modify the code to
strictly align with the hardware behavior.
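
If we do decide to follow the hardware behavior strictly, the change I
have in mind is roughly the following (an untested sketch, not part of
the posted patch; the helper names are made up, and it assumes KVM's
existing guest_cpuid_has() / __is_canonical_address() helpers and the
X86_FEATURE_LA57 feature bit):

/*
 * Sketch only: size the canonicality check by what the vCPU *supports*
 * (CPUID.(EAX=07H,ECX=0):ECX.LA57), not by what it has *enabled* in CR4.
 */
static inline u8 vcpu_supported_virt_addr_bits(struct kvm_vcpu *vcpu)
{
	return guest_cpuid_has(vcpu, X86_FEATURE_LA57) ? 57 : 48;
}

static inline bool is_noncanonical_msr_value(u64 data, struct kvm_vcpu *vcpu)
{
	return !__is_canonical_address(data, vcpu_supported_virt_addr_bits(vcpu));
}

and the MSR cases quoted above in __kvm_set_msr() would then call
is_noncanonical_msr_value() instead of is_noncanonical_address().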