Re: [PATCH] KVM: x86: fix L1TF's MMIO GFN calculation

On Tue, 2018-09-25 at 12:33 -0700, Junaid Shahid wrote:
> On 09/25/2018 07:48 AM, Sean Christopherson wrote:

...

> > @@ -423,6 +432,8 @@ EXPORT_SYMBOL_GPL(kvm_mmu_set_mask_ptes);
> >  
> >  static void kvm_mmu_reset_all_pte_masks(void)
> >  {
> > +	u8 low_gpa_bits;
> > +
> >  	shadow_user_mask = 0;
> >  	shadow_accessed_mask = 0;
> >  	shadow_dirty_mask = 0;
> > @@ -437,12 +448,16 @@ static void kvm_mmu_reset_all_pte_masks(void)
> >  	 * appropriate mask to guard against L1TF attacks. Otherwise, it is
> >  	 * assumed that the CPU is not vulnerable to L1TF.
> >  	 */
> > +	low_gpa_bits = boot_cpu_data.x86_phys_bits;
> >  	if (boot_cpu_data.x86_phys_bits <
> > -	    52 - shadow_nonpresent_or_rsvd_mask_len)
> > +	    52 - shadow_nonpresent_or_rsvd_mask_len) {
> >  		shadow_nonpresent_or_rsvd_mask =
> >  			rsvd_bits(boot_cpu_data.x86_phys_bits -
> >  				  shadow_nonpresent_or_rsvd_mask_len,
> >  				  boot_cpu_data.x86_phys_bits - 1);
> > +		low_gpa_bits -= shadow_nonpresent_or_rsvd_mask_len;
> > +	}
> > +	shadow_nonpresent_or_rsvd_lower_gpa_mask = (1ULL << low_gpa_bits) - 1;
> I think that it might be slightly better to do something like:
> 
> +	shadow_nonpresent_or_rsvd_lower_gpa_mask = rsvd_bits(PAGE_SHIFT, low_gpa_bits - 1);
> 
> Of course, it doesn't matter for get_mmio_spte_gfn() because that already shifts by PAGE_SHIFT, but could matter if this were to get used somewhere else.

Good point, we're providing a mask for the GFN, not the GPA.  I'll
send a v2.


