On Wed, Oct 31, 2018 at 7:15 AM Sean Christopherson
<sean.j.christopherson@xxxxxxxxx> wrote:
>
> On Wed, Oct 31, 2018 at 07:12:16AM -0700, Dave Hansen wrote:
> > On 10/31/18 6:26 AM, Marc Orr wrote:
> > > +/*
> > > + * To prevent vmx_msr_entry array from crossing a page boundary, require:
> > > + * sizeof(*vmx_msrs.vmx_msr_entry.val) to be a power of two. This is guaranteed
> > > + * through compile-time asserts that:
> > > + * - NR_AUTOLOAD_MSRS * sizeof(struct vmx_msr_entry) is a power of two
> > > + * - NR_AUTOLOAD_MSRS * sizeof(struct vmx_msr_entry) <= PAGE_SIZE
> > > + * - The allocation of vmx_msrs.vmx_msr_entry.val is aligned to its size.
> > > + */
> >
> > Why do we need to prevent them from crossing a page boundary?
>
> The VMCS takes the physical address of the load/store lists. I
> requested that this information be added to the changelog. Marc
> deferred addressing my comments since there's a decent chance
> patches 3/4 and 4/4 will be dropped in the end.

Exactly. And the code (in these patches) to map these virtual addresses
to physical addresses operates at page granularity, and will break for
memory that spans more than a single page.