On 08/30/2015 05:12 PM, Michael S. Tsirkin wrote:
Even when we skip data decoding, MMIO is slightly slower than port IO because it goes through the page tables, so the CPU must do a page walk on each access. This overhead is normally masked by the TLB, but not so for KVM MMIO, where the PTEs are marked as reserved and are therefore never cached. As ioeventfd memory is never read, make it possible to use RO pages on the host for ioeventfds instead.
I like this idea.
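For context, a minimal userspace sketch of the kind of ioeventfd this affects: a wildcard (len == 0) MMIO ioeventfd, which KVM registers on KVM_FAST_MMIO_BUS. The helper name, vm_fd and the doorbell address are illustrative; the ioctl and struct are the standard KVM_IOEVENTFD interface (len == 0 requires KVM_CAP_IOEVENTFD_NO_LENGTH).

#include <linux/kvm.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <string.h>
#include <unistd.h>

/* Illustrative helper: register a wildcard (len == 0) MMIO ioeventfd.
 * KVM puts len == 0 ioeventfds on KVM_FAST_MMIO_BUS, which is the bus
 * the hunk below probes. */
static int register_fast_mmio_doorbell(int vm_fd, __u64 gpa)
{
	struct kvm_ioeventfd ioev;
	int efd = eventfd(0, 0);

	if (efd < 0)
		return -1;

	memset(&ioev, 0, sizeof(ioev));
	ioev.addr = gpa;	/* guest-physical doorbell address */
	ioev.len = 0;		/* match any access width */
	ioev.fd = efd;
	ioev.flags = 0;		/* MMIO (not PIO), no datamatch */

	if (ioctl(vm_fd, KVM_IOEVENTFD, &ioev) < 0) {
		close(efd);
		return -1;
	}
	return efd;		/* read()/poll() this fd for guest kicks */
}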
The result is that TLBs are cached, which finally makes MMIO as fast as port IO.
What does "TLBs are cached" mean? Even after applying the patch no new TLB type can be cached.
Signed-off-by: Michael S. Tsirkin <mst@xxxxxxxxxx>
---
 arch/x86/kvm/vmx.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 9d1bfd3..ed44026 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -5745,6 +5745,11 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
 		vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO,
 			      GUEST_INTR_STATE_NMI);
 
 	gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
+	if (!kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
+		skip_emulated_instruction(vcpu);
+		return 1;
+	}
+
I am afraid that the common page fault entry point is not a good place to do this work. Would it be better to move it to kvm_handle_bad_page()? The difference is that the fast_page_fault() workload would then be included on the path, but that is light enough, and MMIO exits should not be very frequent, so I think it's okay.
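To make that suggestion concrete, a rough sketch of the relocation (not a drop-in patch: the kvm_handle_bad_page() signature follows contemporary arch/x86/kvm/mmu.c, and RET_FAST_MMIO_HANDLED is an illustrative stand-in for whatever return value makes the caller skip the instruction and resume the guest):

static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, pfn_t pfn)
{
	if (pfn == KVM_PFN_ERR_RO_FAULT) {
		/*
		 * Write fault on a host-read-only mapping: probe the
		 * fast MMIO bus (the same kvm_io_bus_write() call the
		 * patch adds to handle_ept_violation()) before falling
		 * back to full emulation.
		 */
		if (!kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS,
				      gfn_to_gpa(gfn), 0, NULL))
			return RET_FAST_MMIO_HANDLED; /* illustrative */
	}
	/* ... existing hwpoison / -EFAULT handling unchanged ... */
}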