> > +static void *alloc_2m_page(void)
> > +{
> > +	return alloc_pages(PAGE_2M_ORDER);
> > +}
>
> Allocating 2mb pages is complete overkill.  The absolute theoretical max
> for the number of MSRs is (8 * 512) = 4096, for a total of 32kb per list.
> We can even show the math so that it's obvious how the size is
> calculated.  Plus one order so we can test overrun.
>
> 	/*
> 	 * The max number of MSRs is specified in 3 bits, plus 1, i.e. 7+1==8.
> 	 * Allocate 64k bytes of data to cover max_msr_list_size and then some.
> 	 */
> 	static const u32 msr_list_page_order = 4;

The per-list math looks off: 8 * 512 = 4096 is the max number of MSRs, but
each entry is 16 bytes [1], so the theoretical max is 4096 * 16 = 64 kB
per list, not 32 kB, right?

I'll happily apply what you've suggested in v2, but I don't see why it's
so terrible to over-allocate here.  A generic 2 MB page allocator can be
reused going forward, and encourages uniformity across tests.

[1] struct vmx_msr_entry {
	u32 index;
	u32 reserved;
	u64 value;
} __aligned(16);

> > +static void populate_msr_list(struct vmx_msr_entry *msr_list, int count)
> > +{
> > +	int i;
> > +
> > +	for (i = 0; i < count; i++) {
> > +		msr_list[i].index = MSR_IA32_TSC;
> > +		msr_list[i].reserved = 0;
> > +		msr_list[i].value = 0x1234567890abcdef;
> > +	}
> > +}
>
> Maybe overkill, but we can use a fast string op for this.  A single
> "rep stosq" can't replicate a 16-byte entry, since each entry spans two
> qwords, but writing the first entry and then copying it forward with an
> overlapping "rep movsb" repeats the pattern.  I think I got the asm
> right?
>
> static void populate_msr_list(struct vmx_msr_entry *msr_list, int count)
> {
> 	struct vmx_msr_entry *src = msr_list;
> 	struct vmx_msr_entry *dst = msr_list + 1;
> 	unsigned long bytes;
>
> 	if (!count)
> 		return;
>
> 	msr_list[0].index = MSR_IA32_TSC;
> 	msr_list[0].reserved = 0;
> 	msr_list[0].value = 0x1234567890abcdef;
>
> 	/*
> 	 * A forward "rep movsb" with overlapping src/dst copies byte by
> 	 * byte, so it replicates the 16-byte pattern across the rest of
> 	 * the list.
> 	 */
> 	bytes = (count - 1) * sizeof(*msr_list);
> 	asm volatile (
> 		"rep movsb\n\t"
> 		: "+c"(bytes), "+S"(src), "+D"(dst)
> 		:
> 		: "memory"
> 	);
> }

:-).  I do think it's overkill.  However, I'm OK applying this suggestion
in v2 if everyone is OK with me adding a comment that explains it in
terms of the original loop, so that x86 noobs like myself can understand
what's going on.

> This is a misleading name, e.g. it took me quite a while to realize this
> is testing only the passing scenario.  For me, "limit test" implies that
> it'd be deliberately exceeding the limit, or at least testing both the
> passing and failing cases.  I suppose we can't easily test the VMX abort
> cases, but we can at least test VM_ENTER_LOAD.
>
> Distilling things down to the bare minimum yields something like the
> following.

Looks excellent overall.

It's still not clear what the consensus is on whether or not to test the
VM-entry failure case.  A flag seems like a reasonable compromise.  I've
never added a flag to a kvm-unit-test, but something like the sketch
below might do.
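
Rough sketch of the flag parsing -- the "-entry_failure" name is just a
placeholder I made up, but the argv loop is the usual pattern for
consuming arguments passed via extra_params in unittests.cfg:

/* Placeholder flag, e.g. extra_params = -append '-entry_failure'. */
static bool test_entry_failure;

int main(int argc, const char *argv[])
{
	int i;

	for (i = 1; i < argc; i++) {
		/* Opt in to the failing VM-entry case. */
		if (!strcmp(argv[i], "-entry_failure"))
			test_entry_failure = true;
	}

	/* ... existing test setup, gating the failing case on the flag ... */
}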