lat_rpc performance issue in kvm?

Hi, All

I'm using lat_rpc (a workload in LMBench) to measure the
inter-process communication latency between two processes
(a client/server pair). In a Linux guest on KVM, if I bind the client
and server to separate cores, the latency is much worse than when
binding both to the same core. The number of events causing VM exits
is roughly the same in the two test cases, which suggests the slowdown
is not caused by interaction with the VMM. On the host, by contrast,
the latency difference between the two cases is much smaller.
I used the "isolcpus" boot option for both host and guest, and pinned
each vCPU to a pCPU, all on the same socket.
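For reference, the pinning described above can be sketched roughly as
below. The domain name "guest" and the CPU numbers are hypothetical
placeholders, and this assumes the VM is managed through libvirt's virsh:

```shell
# Pin each vCPU to a dedicated pCPU on the same socket.
# "guest" is a hypothetical libvirt domain name; adjust to your setup.
# pCPUs 1-3 are assumed isolated on the host via the kernel boot
# option isolcpus=1,2,3.
virsh vcpupin guest 0 1
virsh vcpupin guest 1 2
virsh vcpupin guest 2 3

# Verify the resulting affinity:
virsh vcpuinfo guest
```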

The data is listed below. Does anyone have any idea why?

LMBench server: taskset -c 2 ./lat_rpc -s localhost

Client command                                              host    vm
taskset -c 2 ./lat_rpc -p tcp localhost (same core)         19ms    18ms
taskset -c 1 ./lat_rpc -p tcp localhost (different core)    21ms    48ms
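One way to compare the VM-exit counts between the two cases is perf's
kvm subcommand. This is a sketch, assuming perf is installed on the
host; the QEMU PID is a placeholder to fill in:

```shell
# Record KVM events for the guest's QEMU process while one lat_rpc
# measurement runs (here: a 10-second window).
perf kvm stat record -p <qemu-pid> sleep 10

# Summarize exit reasons and counts:
perf kvm stat report --event=vmexit
```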

The system has an Intel Sandy Bridge processor and runs the
3.11.10-301.fc20.x86_64 Linux kernel.

Any suggestions or comments would be really appreciated.

Thx, Xuekun

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
