Re: [PATCH RFC 1/3] vmx: allow ioeventfd for EPT violations

On 08/31/2015 03:46 PM, Michael S. Tsirkin wrote:
> On Mon, Aug 31, 2015 at 10:53:58AM +0800, Xiao Guangrong wrote:
>>
>> On 08/30/2015 05:12 PM, Michael S. Tsirkin wrote:
>>> Even when we skip data decoding, MMIO is slightly slower
>>> than port IO because it uses the page tables, so the CPU
>>> must do a page walk on each access.
>>>
>>> This overhead is normally masked by the TLB cache, but not
>>> so for KVM MMIO, where PTEs are marked as reserved and so
>>> are never cached.
>>>
>>> As ioeventfd memory is never read, make it possible to use
>>> RO pages on the host for ioeventfds instead.

>> I like this idea.

>>> The result is that TLBs are cached, which finally makes MMIO
>>> as fast as port IO.

What does "TLBs are cached" mean? Even after applying the patch
no new TLB type can be cached.

> The Intel manual says:
>
> 	No guest-physical mappings or combined mappings are created with
> 	information derived from EPT paging-structure entries that are not
> 	present (bits 2:0 are all 0) or that are misconfigured (see Section
> 	28.2.3.1).
>
> 	No combined mappings are created with information derived from guest
> 	paging-structure entries that are not present or that set reserved bits.
>
> Thus mappings that result in an EPT violation are created; this makes
> an EPT violation preferable to an EPT misconfiguration.

Hmm... but your logic completely bypasses page-table installation; the page
table entry stays non-present forever for ioeventfd memory.

>>> Signed-off-by: Michael S. Tsirkin <mst@xxxxxxxxxx>
>>> ---
>>>  arch/x86/kvm/vmx.c | 5 +++++
>>>  1 file changed, 5 insertions(+)
>>>
>>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>>> index 9d1bfd3..ed44026 100644
>>> --- a/arch/x86/kvm/vmx.c
>>> +++ b/arch/x86/kvm/vmx.c
>>> @@ -5745,6 +5745,11 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
>>>  		vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO, GUEST_INTR_STATE_NMI);
>>>
>>>  	gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
>>> +	if (!kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
>>> +		skip_emulated_instruction(vcpu);
>>> +		return 1;
>>> +	}
>>> +
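
For context, the KVM_FAST_MMIO_BUS probed above is populated from userspace
through the KVM_IOEVENTFD ioctl with a zero-length region (which needs the
KVM_CAP_IOEVENTFD_NO_LENGTH capability). A minimal userspace sketch, assuming
vm_fd is an open VM file descriptor and efd an already-created eventfd:

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Register a "fast MMIO" ioeventfd: len == 0 and no datamatch means any
 * write to addr signals the eventfd without any data decoding. */
static int arm_fast_mmio(int vm_fd, __u64 addr, int efd)
{
	struct kvm_ioeventfd ioevent = {
		.addr  = addr,	/* guest-physical address of the doorbell */
		.len   = 0,	/* zero length selects the fast MMIO bus */
		.fd    = efd,	/* eventfd signalled on each guest write */
		.flags = 0,	/* no DATAMATCH, MMIO rather than PIO */
	};

	return ioctl(vm_fd, KVM_IOEVENTFD, &ioevent);
}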

>> I am afraid that the common page fault entry point is not a good place to
>> do the work.

> Why isn't it?

1) You always do kvm_io_bus_write() even if it is a read access. You cannot
   assume that the memory region can't be read by the guest. (A possible
   refinement is sketched after this list.)

2) The cost of kvm_io_bus_write() is added to every kind of page fault, and
   normal #PFs are far more frequent than #PFs on RO memory.

3) It completely bypasses the logic for handling RO memslots.
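
A sketch of how point 1 might be addressed, reusing the exit qualification
that handle_ept_violation() already reads a few lines above the hunk (per the
Intel SDM, bit 0 of an EPT-violation exit qualification indicates a data read
and bit 1 a data write); this is illustrative only, not tested code:

	/* Sketch: probe the fast MMIO bus only for write accesses. */
	if ((exit_qualification & (1UL << 1)) &&
	    !kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
		skip_emulated_instruction(vcpu);
		return 1;
	}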


>> What about moving it to kvm_handle_bad_page()? The difference is that the
>> workload of fast_page_fault() is then included, but that is light enough,
>> and MMIO exits should not be very frequent, so I think it's okay.

> That was supposed to be a slow path; I doubt it'll work well without
> major code restructuring. IIUC, by design everything that does not go
> through fast_page_fault() is supposed to be a slow path that only
> happens rarely.


Do you have performance numbers comparing this patch with the approach I
suggested?

> But in this case the page stays read-only, so we need a new fast path
> through the code.


Another solution is to make the MMU recognise RO regions that are
write-mostly and then mark their page table entries as reserved rather
than read-only.
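
A very rough sketch of that direction; every name below is hypothetical and
this is not existing KVM code, just an illustration of the idea:

/*
 * Hypothetical: when mapping a write-mostly RO region, install an SPTE
 * with reserved bits set instead of a read-only SPTE. A guest write then
 * raises an EPT misconfiguration, which KVM already handles on a fast
 * path, instead of an EPT violation that takes the full fault path.
 */
static void map_write_mostly_ro_spte(u64 *sptep, u64 gfn)
{
	/* make_reserved_spte() is a hypothetical helper that builds an
	 * entry with reserved bits set, analogous to KVM's MMIO SPTEs. */
	*sptep = make_reserved_spte(gfn);
}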
