On Mon, Jan 30, 2012 at 12:24:11PM +0200, Avi Kivity wrote:
> > +
> > 	ctxt->ops->set_segment(ctxt, selector, &desc, base3, seg);
> >  }
> >
> > @@ -2273,6 +2281,24 @@ static int load_state_from_tss32(struct x86_emulate_ctxt *ctxt,
> >  		return emulate_gp(ctxt, 0);
> >  	ctxt->_eip = tss->eip;
> >  	ctxt->eflags = tss->eflags | 2;
> > +
> > +	/*
> > +	 * If we're switching between Protected Mode and VM86, we need to make
> > +	 * sure to update the mode before loading the segment descriptors so
> > +	 * that the selectors are interpreted correctly.
> > +	 *
> > +	 * Need to get it to the vcpu struct immediately because it influences
> > +	 * the CPL which is checked at least when loading the segment
> > +	 * descriptors and when pushing an error code to the new kernel stack.
> > +	 */
> > +	if (ctxt->eflags & X86_EFLAGS_VM)
> > +		ctxt->mode = X86EMUL_MODE_VM86;
> > +	else
> > +		ctxt->mode = X86EMUL_MODE_PROT32;
> > +
>
> Shouldn't this be done after the set_segment_selector() block? My
> interpretation of the SDM is that if a fault happens while loading the
> descriptors, the fault happens with the old segment cache, that is, it
> needs to be interpreted according to the old mode.
>
No, the spec says:

    Any errors associated with this loading and qualification occur in the
    context of the new task.

--
			Gleb.