On Wed, Oct 12, 2022, Hou Wenlong wrote:
> When loading code segment descriptor in iret instruction emulation, the
> checks are same as far return instruction emulation, so transfer type
> should be X86_TRANSFER_RET in __load_segment_descriptor(). Although,
> only iret in real mode is implemented now, and no checks are actually
> needed for real mode, it would still be better to mark transfer type as
> X86_TRANSFER_RET.

It's not strictly a RET though.  The RPL vs. DPL checks in
__load_segment_descriptor() might do the right thing, but there's a rather
large pile of stuff IRET can do that RET can't (ignoring the fact that KVM
doesn't even emulate FAR RET to outer privilege levels).

And __emulate_int_real() also loads CS with X86_TRANSFER_NONE, i.e. KVM
still has a weird path to worry about.

Rather than make the IRET case slightly less wrong, what about adding a
sanity check in __load_segment_descriptor() that KVM doesn't attempt to
load CS in Protected Mode with X86_TRANSFER_NONE?

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 3b27622d4642..fe735e18c419 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -1641,6 +1641,14 @@ static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
 			goto exception;
 		break;
 	case VCPU_SREG_CS:
+		/*
+		 * KVM uses "none" when loading CS as part of emulating Real
+		 * Mode exceptions and IRET (handled above).  In all other
+		 * cases, loading CS without a control transfer is a KVM bug.
+		 */
+		if (WARN_ON_ONCE(transfer == X86_TRANSFER_NONE))
+			goto exception;
+
 		if (!(seg_desc.type & 8))
 			goto exception;

>
> No functional change intended.
>
> Signed-off-by: Hou Wenlong <houwenlong.hwl@xxxxxxxxxxxx>
> ---
>  arch/x86/kvm/emulate.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
> index 3b27622d4642..5052eb480068 100644
> --- a/arch/x86/kvm/emulate.c
> +++ b/arch/x86/kvm/emulate.c
> @@ -2100,6 +2100,7 @@ static int emulate_iret_real(struct x86_emulate_ctxt *ctxt)
>  			     X86_EFLAGS_FIXED;
>  	unsigned long vm86_mask = X86_EFLAGS_VM | X86_EFLAGS_VIF |
>  				  X86_EFLAGS_VIP;
> +	u8 cpl = ctxt->ops->cpl(ctxt);
>
>  	/* TODO: Add stack limit check */
>
> @@ -2121,7 +2122,8 @@ static int emulate_iret_real(struct x86_emulate_ctxt *ctxt)
>  	if (rc != X86EMUL_CONTINUE)
>  		return rc;
>
> -	rc = load_segment_descriptor(ctxt, (u16)cs, VCPU_SREG_CS);
> +	rc = __load_segment_descriptor(ctxt, (u16)cs, VCPU_SREG_CS, cpl,
> +				       X86_TRANSFER_RET, NULL);
>
>  	if (rc != X86EMUL_CONTINUE)
>  		return rc;
> --
> 2.31.1
>