Re: [PATCH v7 2/5] KVM: x86: Virtualize CR3.LAM_{U48,U57}

On 4/13/2023 5:13 PM, Huang, Kai wrote:
>>>> On 4/13/2023 10:27 AM, Huang, Kai wrote:
>>>>> On Thu, 2023-04-13 at 09:36 +0800, Binbin Wu wrote:
>>>>>> On 4/12/2023 7:58 PM, Huang, Kai wrote:

>>>>>>>> ...
>>>>>>>> +	root_gfn = (root_pgd & __PT_BASE_ADDR_MASK) >> PAGE_SHIFT;
>>>>>>> Or, should we explicitly mask vcpu->arch.cr3_ctrl_bits?  In this
>>>>>>> way, the mmu_check_root() below may potentially catch other
>>>>>>> invalid bits, but in practice there should be no difference, I
>>>>>>> guess.
>>>>>> In the previous version, vcpu->arch.cr3_ctrl_bits was used as the
>>>>>> mask.
>>>>>>
>>>>>> However, Sean pointed out that the return value of
>>>>>> mmu->get_guest_pgd(vcpu) could be the EPTP in the nested case, so
>>>>>> it is not rational to apply the CR3 control-bit mask to an EPTP.
>>>>> Yes, although EPTP's high bits don't contain any control bits.
>>>>>
>>>>> But perhaps we want to make it future-proof in case more control
>>>>> bits are added to EPTP too.

>>>> Since the guest pgd has been checked for validity, for both CR3 and
>>>> EPTP, it is safe to mask off the non-address bits to get the GFN.
>>>>
>>>> Maybe I should add this CR3 vs. EPTP part to the changelog to make
>>>> it more understandable.
>>> This isn't necessary, and can/should be done in comments if needed.
>>>
>>> But IMHO you may want to add more material to explain how nested
>>> cases are handled.
>> Do you mean about CR3 or others?
> This patch is about CR3, so CR3.

For the nested case, I plan to add the following to the changelog:

    For nested guests:
    - If CR3 is intercepted, after CR3 is handled in L1, CR3 will be
      checked in nested_vmx_load_cr3() before returning to L2.
    - For the shadow paging case (SPT02), LAM bits are also handled to
      form a new shadow CR3 in vmx_load_mmu_pgd().




