Re: [PATCH] kvm: x86: Fix several SPTE mask calculation errors caused by MKTME

On Wed, 2019-04-24 at 08:57 -0700, Sean Christopherson wrote:
> On Wed, Apr 24, 2019 at 05:13:12AM -0700, Huang, Kai wrote:
> > 
> > > > >  	low_phys_bits = boot_cpu_data.x86_phys_bits;
> > > > > -	if (boot_cpu_data.x86_phys_bits <
> > > > > -	    52 - shadow_nonpresent_or_rsvd_mask_len) {
> > > > > +	shadow_nonpresent_or_rsvd_mask = 0;
> > > > > +	if (need_l1tf) {
> > > > >  		shadow_nonpresent_or_rsvd_mask =
> > > > >  			rsvd_bits(boot_cpu_data.x86_phys_bits -
> > > > >  				  shadow_nonpresent_or_rsvd_mask_len,
> > > > 
> > > > This is broken, the reserved bits mask is being calculated with the wrong
> > > > number of physical bits.  I think fixing this would eliminate the need for
> > > > the high_gpa_offset shenanigans.
> > > 
> > > You are right. should use 'shadow_phys_bits' instead. Thanks. Let me think
> > > whether high_gpa_offset can be avoided.
> > > 
> > 
> > Hi Sean, Paolo, and others,
> > 
> > After re-thinking, I think we should use boot_cpu_data.x86_cache_bits to
> > calculate shadow_nonpresent_or_rsvd_mask, rather than shadow_phys_bits, since
> > on some Intel CPUs the L1D cache internally uses more address bits than the
> > physical address bits reported by CPUID. To make the KVM L1TF mitigation
> > work, we actually have to set the topmost bits below the L1D cache limit,
> > not just below the reported physical address bits, in the SPTE (which means
> > the original code also has a bug, if I understand correctly).
> 
> What's the exact CPU behavior you're referencing?  Unless it's doing some
> crazy PA aliasing it should be a non-issue.

There's no spec that describes the exact behavior, but I am looking at this:

https://software.intel.com/security-software-guidance/insights/deep-dive-intel-analysis-l1-terminal-fault

	"Some processors may internally implement more address bits in the L1D cache than are
         reported in MAXPHYADDR. This is not reported by CPUID, so the following table can be used:

	Table 2: Processors Implementing More L1D Address Bits than Reported
	Processor code name	Implemented L1D bits
	Nehalem, Westmere	44
	Sandy Bridge and newer	46
	On these systems the OS can set one or more bits above MAXPHYADDR but below the L1D limit 
	to ensure that the PTE does not reference any physical memory address. This can often be 
	used to avoid limiting the amount of usable physical memory."
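
To put made-up numbers on that (the MAXPHYADDR value below is purely an
assumption for illustration): on a part reporting MAXPHYADDR = 40 but
implementing 44 L1D address bits, any of bits 40-43 can be set in a
not-present PTE without the address aliasing cacheable memory:

	/* Example only: bits above MAXPHYADDR (40) but below the L1D limit (44) */
	u64 inert_bits = GENMASK_ULL(43, 40);	/* == 0x00000f0000000000 */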

And the kernel actually sets the limit on usable physical memory based on boot_cpu_data.x86_cache_bits:

static inline unsigned long long l1tf_pfn_limit(void)
{
        return BIT_ULL(boot_cpu_data.x86_cache_bits - 1 - PAGE_SHIFT);
}
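
So for the SPTE mask I have something like the below in mind (rough and
untested; it keeps the 'need_l1tf' check from the patch and the existing
variable names from kvm_mmu_reset_all_pte_masks(), so please treat it as a
sketch rather than final code):

	low_phys_bits = boot_cpu_data.x86_phys_bits;
	shadow_nonpresent_or_rsvd_mask = 0;
	if (need_l1tf) {
		/*
		 * Place the mask just below the L1D limit rather than
		 * MAXPHYADDR, so that the poisoned address cannot hit
		 * cached memory even on CPUs whose L1D uses more address
		 * bits than CPUID reports.
		 */
		low_phys_bits = boot_cpu_data.x86_cache_bits -
				shadow_nonpresent_or_rsvd_mask_len;
		shadow_nonpresent_or_rsvd_mask =
			rsvd_bits(low_phys_bits,
				  boot_cpu_data.x86_cache_bits - 1);
	}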

Thanks,
-Kai
