Re: [PATCH v2 10/10] KVM: selftests: Add option to run dirty_log_perf_test vCPUs in L2

On Wed, May 18, 2022 at 9:37 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Wed, May 18, 2022, David Matlack wrote:
> > On Wed, May 18, 2022 at 8:24 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > > Page table allocations are currently hardcoded to come from memslot0.  memslot0
> > > is required to be in lower DRAM, and thus tops out at ~3gb for all intents and
> > > purposes because we need to leave room for the xAPIC.
> > >
> > > And I would strongly prefer not to plumb back the ability to specify an alternative
> > > memslot for page table allocations, because, except for truly pathological tests,
> > > that functionality is unnecessary and pointless complexity.
> > >
> > > > I don't think it's very hard - walk the mem regions in kvm_vm.regions
> > > > should work for us?
> > >
> > > Yeah.  Alternatively, the test can identity map all of memory <4gb and then also
> > > map "guest_test_phys_mem - guest_num_pages".  I don't think there's any other memory
> > > to deal with, is there?
> >
> > This isn't necessary for 4-level, but also wouldn't be too hard to
> > implement. I can take a stab at implementing it in v3 if we think
> > 5-level selftests are coming soon.
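
If it does become necessary, I'd expect it to boil down to roughly the
following (untested sketch, reusing the existing nested_map() helper and
assuming guest_test_phys_mem/guest_num_pages are in scope):

        /* Identity map [0, 4gb) so lower DRAM, including memslot0, is reachable. */
        nested_map(vmx, vm, 0, 0, 4ull << 30);

        /* Identity map the test region itself. */
        nested_map(vmx, vm, guest_test_phys_mem, guest_test_phys_mem,
                   guest_num_pages * vm->page_size);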
>
> The current incarnation of nested_map_all_1g() is broken irrespective of 5-level
> paging.  If MAXPHYADDR > 48, then bits 51:48 will either be ignored or will cause
> reserved #PF or #GP[*].  Because the test puts memory at max_gfn, identity mapping
> test memory will fail if 4-level paging is used and MAXPHYADDR > 48.

Ah good point.

I wasn't able to get a machine with MAXPHYADDR > 48 to test on today, so
I've just made __nested_pg_map() assert that nested_paddr fits in
48 bits. We can add support for 5-level paging, or your idea to
restrict the perf_test_util gfn to 48 bits, in a subsequent series when
it becomes necessary.
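
For reference, the assert amounts to something like this (the exact
message text is illustrative):

        /*
         * 4-level EPT can only address bits 47:0 of the nested physical
         * address space; anything above that needs 5-level paging.
         */
        TEST_ASSERT((nested_paddr >> 48) == 0,
                    "Nested physical address 0x%lx requires 5-level paging",
                    nested_paddr);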

>
> I think the easiest thing would be to restrict the "starting" upper gfn to the min
> of max_gfn and the max addressable gfn based on whether 4-level or 5-level paging
> is in use.
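
If/when we pick that up, I'd expect the capping logic to look roughly
like this (untested; vm_is_5level_ept() is a made-up placeholder for
whatever capability check we end up with):

        uint64_t max_gfn = vm_get_max_gfn(vm);
        uint64_t ept_max_gfn = (1ull << (48 - vm->page_shift)) - 1;

        /* 4-level EPT tops out at a 48-bit guest-physical address. */
        if (!vm_is_5level_ept(vm) && max_gfn > ept_max_gfn)
                max_gfn = ept_max_gfn;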
>
> [*] Intel's SDM is comically out of date and pretends 5-level EPT doesn't exist,
>     so I'm not sure what happens if a GPA is greater than the PWL.
>
>     Section "28.3.2 EPT Translation Mechanism" still says:
>
>     The EPT translation mechanism uses only bits 47:0 of each guest-physical address.
>
>     No processors supporting the Intel 64 architecture support more than 48
>     physical-address bits. Thus, no such processor can produce a guest-physical
>     address with more than 48 bits. An attempt to use such an address causes a
>     page fault. An attempt to load CR3 with such an address causes a general-protection
>     fault. If PAE paging is being used, an attempt to load CR3 that would load a
>     PDPTE with such an address causes a general-protection fault.


