On Thu, Apr 30, 2009 at 11:56:14AM +0300, Avi Kivity wrote:
> Andrew Theurer wrote:
>> Comparing guest time to all other busy time, that's a 23.88/43.02 = 55%
>> overhead for virtualization. I certainly don't expect it to be 0, but
>> 55% seems a bit high. So, what's the reason for this overhead? At the
>> bottom is oprofile output of top functions for KVM. Some observations:
>>
>> 1) I'm seeing about 2.3% in scheduler functions [that I recognize].
>> Does that seem a bit excessive?
>
> Yes, it is. If there is a lot of I/O, this might be due to the thread
> pool used for I/O.
>
>> 2) cpu_physical_memory_rw due to not using preadv/pwritev?
>
> I think both virtio-net and virtio-blk use memcpy().
>
>> 3) vmx_[save|load]_host_state: I take it this is from guest switches?
>
> These are called when you context-switch from a guest, and, much more
> frequently, when you enter qemu.
>
>> We have 180,000 context switches a second. Is this more than expected?
>
> Way more. Across 16 logical cpus, this is >10,000 cs/sec/cpu.
>
>> I wonder if schedstats can show why we context switch (need to let
>> someone else run, yielded, waiting on io, etc).
>
> Yes, there is a scheduler tracer, though I have no idea how to operate it.
>
> Do you have kvm_stat logs?

In case the kvm_stat logs don't shed enough light, this should help.

Documentation/trace/ftrace.txt:

sched_switch
------------

This tracer simply records schedule switches. Here is an example of how
to use it.

 # echo sched_switch > /debug/tracing/current_tracer
 # echo 1 > /debug/tracing/tracing_enabled
 # sleep 1
 # echo 0 > /debug/tracing/tracing_enabled
 # cat /debug/tracing/trace
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html