Avi Kivity wrote:
> Anthony Liguori wrote:
>> I don't think we even need that to end this debate. I'm convinced
>> we have a bug somewhere. Even disabling TX mitigation, I see a ping
>> latency of around 300ns whereas it's only 50ns on the host. This
>> defies logic so I'm now looking to isolate why that is.
>>
>> I'm down to 90us. Obviously, s/ns/us/g above. The exec.c changes
>> were the big winner... I hate qemu sometimes.
>
> What, this:
The UDP_RR test was limited by CPU consumption. QEMU was pegging a CPU at
only about 4000 packets per second whereas the host could do 14000. An
oprofile run showed that phys_page_find/cpu_physical_memory_rw were at
the top by a wide margin, which makes little sense since virtio is
zero-copy in kvm-userspace today.
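For context, here is a simplified sketch (not the actual QEMU code) of the
slow path being described: every load from guest physical memory funnels
through the generic cpu_physical_memory_rw() path, which does a
phys_page_find() lookup on each and every access, even for plain RAM, so
per-descriptor ring accesses pay the full lookup cost every time.

/* Sketch only: each 32-bit guest-physical load repeats the full page
 * lookup. cpu_physical_memory_rw() internally calls phys_page_find()
 * on every call, which is what dominated the oprofile run. */
static uint32_t ldl_phys_slow(target_phys_addr_t addr)
{
    uint32_t val;

    cpu_physical_memory_rw(addr, (uint8_t *)&val, sizeof(val), 0);
    return val;
}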
That left the ring queue accessors, which use ld[wlq]_phys and friends,
which in turn make use of the above. That led me to try the terrible
hack below and, lo and behold, we immediately jumped to 10000 pps. This
only works because almost nothing uses ld[wlq]_phys in practice except
virtio, so breaking it for the non-RAM case didn't matter.
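The patch itself isn't quoted here, but the shape of such a hack would be
something like the following sketch. It assumes guest RAM is one flat host
mapping at a base pointer (phys_ram_base here is an assumption of this
sketch, not necessarily the real patch): read straight through the host
mapping and skip phys_page_find() entirely, which is safe only because the
virtio rings always live in ordinary RAM.

/* Hypothetical sketch of the kind of "terrible hack" described: treat
 * the guest physical address as plain RAM mapped contiguously at
 * phys_ram_base and load straight through the host pointer. No
 * phys_page_find() lookup, but MMIO/ROM addresses are now mishandled,
 * which is tolerable only because virtio rings sit in ordinary RAM. */
static uint32_t ldl_phys_hack(target_phys_addr_t addr)
{
    return ldl_p(phys_ram_base + addr);
}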
We didn't encounter this before because when I changed this behavior, I
tested streaming and ping; both remained the same. You can only expose
this issue if you first disable TX mitigation.
Anyway, if we're able to send this many packets, I suspect we'll also be
able to handle much higher throughputs without TX mitigation, so that's
what I'm going to look at now.
Regards,
Anthony Liguori