Re: [PATCH] kvm: x86: Fix several SPTE mask calculation errors caused by MKTME

On Wed, Apr 24, 2019 at 05:13:12AM -0700, Huang, Kai wrote:
> 
> > > >  	low_phys_bits = boot_cpu_data.x86_phys_bits;
> > > > -	if (boot_cpu_data.x86_phys_bits <
> > > > -	    52 - shadow_nonpresent_or_rsvd_mask_len) {
> > > > +	shadow_nonpresent_or_rsvd_mask = 0;
> > > > +	if (need_l1tf) {
> > > >  		shadow_nonpresent_or_rsvd_mask =
> > > >  			rsvd_bits(boot_cpu_data.x86_phys_bits -
> > > >  				  shadow_nonpresent_or_rsvd_mask_len,
> > > 
> > > This is broken, the reserved bits mask is being calculated with the wrong
> > > number of physical bits.  I think fixing this would eliminate the need for
> > > the high_gpa_offset shenanigans.
> > 
> > You are right, we should use 'shadow_phys_bits' instead. Thanks. Let me
> > think about whether high_gpa_offset can be avoided.
> > 
> 
> Hi Sean, Paolo, and others,
> 
> After re-thinking, I think we should use boot_cpu_data.x86_cache_bits to
> calculate shadow_nonpresent_or_rsvd_mask, rather than shadow_phys_bits, since
> on some particular Intel CPUs the internal cache address bits are larger than
> the physical address bits reported by CPUID. To make this KVM L1TF mitigation
> work, we actually have to set the highest bit of the cache bits, not of the
> physical address bits, in the SPTE (which means the original code also has a
> bug, if I understand correctly).

What's the exact CPU behavior you're referencing?  Unless it's doing some
crazy PA aliasing it should be a non-issue.


