> > >  	low_phys_bits = boot_cpu_data.x86_phys_bits;
> > > -	if (boot_cpu_data.x86_phys_bits <
> > > -	    52 - shadow_nonpresent_or_rsvd_mask_len) {
> > > +	shadow_nonpresent_or_rsvd_mask = 0;
> > > +	if (need_l1tf) {
> > >  		shadow_nonpresent_or_rsvd_mask =
> > >  			rsvd_bits(boot_cpu_data.x86_phys_bits -
> > >  				  shadow_nonpresent_or_rsvd_mask_len,
> >
> > This is broken, the reserved bits mask is being calculated with the wrong
> > number of physical bits.  I think fixing this would eliminate the need for
> > the high_gpa_offset shenanigans.
>
> You are right. We should use 'shadow_phys_bits' instead. Thanks. Let me
> think about whether high_gpa_offset can be avoided.

Hi Sean, Paolo, and others,

After re-thinking, I think we should go even further and use
boot_cpu_data.x86_cache_bits, rather than shadow_phys_bits, to calculate
shadow_nonpresent_or_rsvd_mask, since on some particular Intel CPUs the
internal cache width is larger than the physical address width reported
by CPUID. To make the KVM L1TF mitigation work, we actually have to set
the highest bit of the cache width, not of the physical address width, in
the SPTE (which means the original code also has a bug, if I understand
correctly).
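To be concrete, the calculation would become something like the rough
sketch below. This is only a sketch against the quoted diff: need_l1tf and
shadow_nonpresent_or_rsvd_mask_len are taken from the patch above, and
rsvd_bits() is the existing KVM helper.

	low_phys_bits = boot_cpu_data.x86_phys_bits;
	shadow_nonpresent_or_rsvd_mask = 0;
	if (need_l1tf) {
		/*
		 * Derive the mask from the CPU's internal cache width so
		 * that the poisoned SPTE always points above the range
		 * that can be cached, even when the cache width exceeds
		 * the physical address width reported by CPUID.
		 */
		low_phys_bits = boot_cpu_data.x86_cache_bits -
				shadow_nonpresent_or_rsvd_mask_len;
		shadow_nonpresent_or_rsvd_mask =
			rsvd_bits(low_phys_bits,
				  boot_cpu_data.x86_cache_bits - 1);
	}

Comments?

Thanks,
-Kai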