On Mon, Feb 27, 2023, Robert Hoo wrote:
> Emulate HW LAM masking when doing data access under 64-bit mode.
>
> kvm_lam_untag_addr() implements this: per CR4/CR3 LAM bits configuration,
> firstly check the linear addr conforms LAM canonical, i.e. the highest
> address bit matches bit 63. Then mask out meta data per LAM configuration.
> If failed in above process, emulate #GP to guest.
>
> Signed-off-by: Robert Hoo <robert.hu@xxxxxxxxxxxxxxx>
> ---
>  arch/x86/kvm/emulate.c | 13 ++++++++
>  arch/x86/kvm/x86.h     | 70 ++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 83 insertions(+)
>
> diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
> index 5cc3efa0e21c..77bd13f40711 100644
> --- a/arch/x86/kvm/emulate.c
> +++ b/arch/x86/kvm/emulate.c
> @@ -700,6 +700,19 @@ static __always_inline int __linearize(struct x86_emulate_ctxt *ctxt,
>  	*max_size = 0;
>  	switch (mode) {
>  	case X86EMUL_MODE_PROT64:
> +		/* LAM applies only on data access */
> +		if (!fetch && guest_cpuid_has(ctxt->vcpu, X86_FEATURE_LAM)) {

Dereferencing ctxt->vcpu in the emulator is not allowed.

> +			enum lam_type type;
> +
> +			type = kvm_vcpu_lam_type(la, ctxt->vcpu);
> +			if (type == LAM_ILLEGAL) {
> +				*linear = la;
> +				goto bad;
> +			} else {
> +				la = kvm_lam_untag_addr(la, type);
> +			}
> +		}

This is wildly over-engineered.  Just do the untagging and let
__is_canonical_address() catch any non-canonical bits that weren't stripped.