On Thu, Apr 21, 2022 at 10:14 AM Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:
>
> On 4/21/22 18:51, Peter Oskolkov wrote:
> > Allow kvm-based VMMs to request KVM to pass a custom vmcall
> > from the guest to the VMM in the host.
> >
> > Quite often, operating systems research projects and/or specialized
> > paravirtualized workloads would benefit from an extra-low-overhead,
> > extra-low-latency guest-host communication channel.
>
> You can use a memory page and an I/O port.  It should be as fast as a
> hypercall.  You can even change it to use ioeventfd if an asynchronous
> channel is enough, and then it's going to be less than 1 us latency.

So this function:

uint8_t hyperchannel_ping(uint8_t arg)
{
	uint8_t inb;
	uint16_t port = PORT;

	/* outb/inb require the data in %al ("a") and the port in %dx or an
	 * immediate ("Nd"); volatile keeps the compiler from moving the I/O.
	 */
	asm volatile(
		"outb %[arg], %[port] \n\t"	// write arg
		"inb %[port], %[inb] \n\t"	// read res
		: [inb] "=a"(inb)
		: [arg] "a"(arg), [port] "Nd"(port)
	);
	return inb;
}

takes about 5.5 usec vs 2.5 usec for a vmcall on the same
hardware/kernel/etc. I've also tried AF_VSOCK, and a round trip there
is 30-50 usec.

The main problem with port I/O vs a vmcall is that with port I/O a
second VM exit is needed to return any result to the guest. Am I
missing something?

I'll now try using ioeventfd, but I suspect that building a synchronous
request/response channel on top of it will not match a direct vmcall in
terms of latency.

Are there any other alternatives I should look at?

Thanks,
Peter

>
> Paolo
>
> > With cloud-hypervisor modified to handle the new hypercall (simply
> > return the sum of the received arguments), the following function in
> > _guest userspace_ completes, on average, in 2.5 microseconds (walltime)
> > on a relatively modern Intel Xeon processor:
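
P.S. For anyone who wants to try the ioeventfd variant Paolo suggests,
the host-side registration would look roughly like the sketch below.
KVM_IOEVENTFD and struct kvm_ioeventfd are the regular KVM uAPI; the
port number (PING_PORT) and the vm_fd handle are placeholders for
illustration, and the response path back to the guest (shared memory
plus an interrupt/irqfd) is not shown:

#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/kvm.h>

#define PING_PORT 0xe0	/* placeholder I/O port for illustration */

/* Register an eventfd that KVM signals on every 1-byte guest write
 * (outb) to PING_PORT without exiting to userspace.  vm_fd is the VM
 * file descriptor obtained from KVM_CREATE_VM.
 */
static int register_ping_ioeventfd(int vm_fd)
{
	struct kvm_ioeventfd ioev = {
		.addr  = PING_PORT,
		.len   = 1,				/* match 1-byte accesses */
		.flags = KVM_IOEVENTFD_FLAG_PIO,	/* port I/O, any data value */
	};
	int efd = eventfd(0, EFD_CLOEXEC);

	if (efd < 0)
		return -1;

	ioev.fd = efd;
	if (ioctl(vm_fd, KVM_IOEVENTFD, &ioev) < 0) {
		close(efd);
		return -1;
	}

	/* A VMM thread can now block in read(efd, ...) and wake on each
	 * guest outb to PING_PORT; the guest side is just the outb half
	 * of hyperchannel_ping() above.
	 */
	return efd;
}

The request half then needs no exit to userspace at all; what I'm less
sure about is how cheaply a result can get back to the guest, which is
what I'll be measuring.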