On Thu, Jul 19, 2012 at 03:52:15PM +0300, Avi Kivity wrote:
> On 07/19/2012 01:51 PM, Gleb Natapov wrote:
> >
> >> > +int x86_linearize(struct x86_linearize_params *p, ulong *linear)
> >> >  {
> >> > -	struct desc_struct desc;
> >> > -	bool usable;
> >> >  	ulong la;
> >> >  	u32 lim;
> >> > -	u16 sel;
> >> >  	unsigned cpl, rpl;
> >> >
> >> > -	la = seg_base(ctxt, addr.seg) + addr.ea;
> >> > -	switch (ctxt->mode) {
> >> > +	la = get_desc_base(&p->desc) + p->ea;
> >>
> >> This makes 64-bit mode slower, since before the patch it avoided reading
> >> the segment base for non-fs/gs segments, and only read the segment base
> >> for fs/gs. After the patch we always execute 4 VMREADs (and decode the
> >> results).
> >>
> > That's easy to fix by making the caller prepare a fake desc if the mode
> > is 64-bit and the segment is non-fs/gs. The question is whether this is
> > even measurable.
>
> I'm sure it will be measurable, esp. on older processors. Why not
> measure it?
>
It is easier to just fix it :) Will do and resend if you agree with the
general approach.

--
			Gleb.
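
For reference, a minimal sketch of the caller-side fix discussed above,
assuming struct x86_linearize_params keeps the descriptor in a desc_struct
field, as the quoted hunk suggests. The helper name prepare_linearize_desc()
is made up for illustration; only ctxt->ops->get_segment(), the
X86EMUL_MODE_PROT64 mode value and the VCPU_SREG_* constants are the
emulator's existing names. In 64-bit mode the CS/DS/ES/SS bases are treated
as zero, so a zeroed fake descriptor is enough and the VMCS is only touched
for FS/GS:

static void prepare_linearize_desc(struct x86_emulate_ctxt *ctxt,
				   struct x86_linearize_params *p, int seg)
{
	u16 sel;

	if (ctxt->mode == X86EMUL_MODE_PROT64 &&
	    seg != VCPU_SREG_FS && seg != VCPU_SREG_GS) {
		/*
		 * Long mode forces the CS/DS/ES/SS bases to zero, so a
		 * zeroed descriptor yields the correct linear address
		 * without any VMREADs.
		 */
		memset(&p->desc, 0, sizeof(p->desc));
		return;
	}

	/* FS/GS, or a non-64-bit mode: read the real descriptor. */
	ctxt->ops->get_segment(ctxt, &sel, &p->desc, NULL, seg);
}

That would keep the common 64-bit path as cheap as it was before the patch,
while FS/GS and compatibility/legacy-mode accesses still go through
get_segment() as they must.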