Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.

On 07/15/2014 08:40 PM, Gleb Natapov wrote:
On Tue, Jul 15, 2014 at 08:28:22PM +0800, Tang Chen wrote:
On 07/15/2014 08:09 PM, Gleb Natapov wrote:
On Tue, Jul 15, 2014 at 01:52:40PM +0200, Jan Kiszka wrote:
......

I cannot follow your concerns yet. Specifically, how should
APIC_ACCESS_ADDR (the VMCS field, right?) change while L2 is running? We
currently pin/unpin on L1->L2/L2->L1, respectively. Or what do you mean?

I am talking about this case:
          if (cpu_has_secondary_exec_ctrls()) {
          } else {
              exec_control |=
                  SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
              vmcs_write64(APIC_ACCESS_ADDR,
                  page_to_phys(vcpu->kvm->arch.apic_access_page));
          }
We do not pin here.
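For context, kvm->arch.apic_access_page is set up once per VM by
alloc_apic_access_page() and held in place by a gfn_to_page()
reference, the very pin this series removes. Roughly (from memory of
the current vmx.c, details may differ):

static int alloc_apic_access_page(struct kvm *kvm)
{
        struct page *page;
        struct kvm_userspace_memory_region kvm_userspace_mem;
        int r = 0;

        mutex_lock(&kvm->slots_lock);
        if (kvm->arch.apic_access_page)
                goto out;
        kvm_userspace_mem.slot = APIC_ACCESS_PAGE_PRIVATE_MEMSLOT;
        kvm_userspace_mem.flags = 0;
        kvm_userspace_mem.guest_phys_addr = 0xfee00000ULL;
        kvm_userspace_mem.memory_size = PAGE_SIZE;
        r = __kvm_set_memory_region(kvm, &kvm_userspace_mem);
        if (r)
                goto out;

        /* gfn_to_page() takes a page reference: this is the pin. */
        page = gfn_to_page(kvm, 0xfee00000 >> PAGE_SHIFT);
        if (is_error_page(page)) {
                r = -EFAULT;
                goto out;
        }

        kvm->arch.apic_access_page = page;
out:
        mutex_unlock(&kvm->slots_lock);
        return r;
}

Once that long-held reference is gone, nothing keeps the page in place
after the vmcs_write64() above.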


Hi Gleb,


                if (exec_control & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES) {
......
                        if (vmx->nested.apic_access_page) /* shouldn't happen */
                                nested_release_page(vmx->nested.apic_access_page);
                        vmx->nested.apic_access_page =
                                nested_get_page(vcpu, vmcs12->apic_access_addr);

I thought you were talking about the problem here. We pin
vmcs12->apic_access_addr in memory, and I think we should handle this
page the same way we handle L1's apic access page.
Right ?
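For reference, the pin here happens inside nested_get_page(), which
takes a page reference via gfn_to_page(). A simplified sketch (from
memory of the current code):

static inline struct page *nested_get_page(struct kvm_vcpu *vcpu, gpa_t addr)
{
        /* gfn_to_page() elevates the page refcount, i.e. pins the page
         * until nested_release_page() drops the reference. */
        if (!PAGE_ALIGNED(addr))
                return NULL;

        return gfn_to_page(vcpu->kvm, addr >> PAGE_SHIFT);
}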
Nested kvm pins a lot of pages; it will probably not be easy to handle all of them,
so for now I am concerned with the non-nested case only (but nested should obviously
continue to work, pinning pages just like it does now).

True. I will work on it.

Also, when using PCI passthrough, kvm_pin_pages() pins some pages as well. This is
also on my todo list.
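For reference, kvm_pin_pages() in virt/kvm/iommu.c takes one reference
per guest page so the IOMMU mapping stays valid while the device can
DMA into it. Roughly (from memory):

static pfn_t kvm_pin_pages(struct kvm_memory_slot *slot, gfn_t gfn,
                           unsigned long size)
{
        gfn_t end_gfn;
        pfn_t pfn;

        /* Each gfn_to_pfn_memslot() call takes a reference on the
         * backing page, pinning the whole range for the IOMMU. */
        pfn     = gfn_to_pfn_memslot(slot, gfn);
        end_gfn = gfn + (size >> PAGE_SHIFT);
        gfn    += 1;

        if (is_error_noslot_pfn(pfn))
                return pfn;

        while (gfn < end_gfn)
                gfn_to_pfn_memslot(slot, gfn++);

        return pfn;
}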

But sorry, one thing seems a little strange: I couldn't find where
vmcs12->apic_access_addr is allocated or initialized... Would you please point me to it?



......
                        if (!vmx->nested.apic_access_page)
                                exec_control &=
                                        ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
                        else
                                vmcs_write64(APIC_ACCESS_ADDR,
                                        page_to_phys(vmx->nested.apic_access_page));
                } else if (vm_need_virtualize_apic_accesses(vmx->vcpu.kvm)) {
                        exec_control |=
                                SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
                        vmcs_write64(APIC_ACCESS_ADDR,
                                page_to_phys(vcpu->kvm->arch.apic_access_page));
                }

And yes, we have the problem you mentioned here: the page can be migrated while the
L2 vm is running.
So I think we should force the L2 vm to exit to L1. Right ?

We can request an APIC_ACCESS_ADDR reload during L2->L1 vmexit emulation, so
if APIC_ACCESS_ADDR changes while L2 is running, it will be reloaded for L1 too.
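A minimal sketch of that, assuming the KVM_REQ_APIC_PAGE_RELOAD
request bit this series proposes (the name and the exact hook are
assumptions, not code that exists upstream yet):

        /* In the emulation of an L2->L1 vmexit (nested_vmx_vmexit()):
         * set the (assumed) request bit so that the next guest entry
         * re-reads the apic access page and rewrites APIC_ACCESS_ADDR
         * for L1 before entering the guest. */
        kvm_make_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu);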


The apic access pages for L2 and L1 are not the same page, right ?

I think that, just like we are doing in patch 5/5, we cannot wait for the next L2->L1 vmexit. We should force an L2->L1 vmexit from the mmu_notifier, just like make_all_cpus_request() does.

Am I right ?
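Something like the following, with the same assumed request bit, and
assuming a non-static variant of make_all_cpus_request(), which sets
the request on every vcpu and then kicks them with an IPI, so even a
vcpu running in L2 exits immediately:

void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
                                           unsigned long address)
{
        /* If the invalidated page backs the apic access page, ask every
         * vcpu to reload it; the kick forces an immediate vmexit, L2
         * included.  Hook and request names follow this series'
         * proposal and are assumptions here. */
        if (address == gfn_to_hva(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT))
                kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
}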

Thanks.
