Re: Windows Server 2008 VM performance

On Wed, Jun 03, 2009 at 09:21:16AM -0500, Andrew Theurer wrote:
> Avi Kivity wrote:
>> Andrew Theurer wrote:
>>
>>
>>> Is there a virtio_block driver to test?  
>>
>> There is, but it isn't available yet.
> OK.  Can I assume a better virtio_net driver is in the works as well?
>>
>>> Can we find the root cause of the exits (is there a way to get a stack
>>> dump or something that can show where they are coming from)?
>>
>> Marcelo is working on a super-duper easy to use kvm trace which can  
>> show what's going on.  The old one is reasonably easy though it  
>> exports less data.  If you can generate some traces, I'll have a look  
>> at them.
>
> Thanks Avi.  I'll try out kvm-86 and see if I can generate some kvm  
> trace data.

Clone
git://git.kernel.org/pub/scm/linux/kernel/git/marcelo/linux-2.6-x86-kvmtrace.git,
check out the (remote) kvmtrace branch, and build with KVM=y and
KVM_{INTEL,AMD}=y (there's a missing symbol export; I'll pull the
upstream fix ASAP).
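
For reference, a minimal sketch of those steps (the CONFIG_ option names
are the usual ones and are my assumption, not spelled out in the repo):

git clone git://git.kernel.org/pub/scm/linux/kernel/git/marcelo/linux-2.6-x86-kvmtrace.git
cd linux-2.6-x86-kvmtrace
git checkout -b kvmtrace origin/kvmtrace
# in .config, build KVM into the kernel:
#   CONFIG_KVM=y
#   CONFIG_KVM_INTEL=y   (or CONFIG_KVM_AMD=y on AMD hosts)
make && make modules_install && make install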

Then, while running the benchmark on this kernel, during a phase that is
representative of the workload, do (the paths below assume debugfs is
mounted at /debugfs):

echo kvm_exit > /debugfs/tracing/set_event
sleep n
cat /debugfs/tracing/trace > /tmp/trace-save.txt

Keep n relatively small, 1 or 2 seconds. It would be nice to see both UP
and SMP Win2008 guests.

echo > /debugfs/tracing/set_event    # stop tracing
echo > /debugfs/tracing/trace        # zero the trace buffer

With some post-processing you can then get the exit reason percentages.
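
For example, a quick sketch of such post-processing (hypothetical: it
assumes each kvm_exit line in the saved trace carries a "reason <NAME>"
field, which depends on the exact trace format of this branch):

awk '/kvm_exit/ { for (i = 1; i < NF; i++) if ($i == "reason") { r[$(i+1)]++; n++ } }
     END { for (x in r) printf "%6.2f%%  %s\n", 100 * r[x] / n, x }' \
    /tmp/trace-save.txt | sort -rn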

Alternatively you can use SystemTap (see the attached script; the
entry/exit line-number probes need some adjustment for your source tree)
so that it calculates the exit percentages for you.


global exit_names[70]
global latency[70]
global vmexit
global vmentry
global vmx_exit_reason

// run "stap -t kvm.stp" to find out the overhead of each probe
// and change it here. This should be automatic!
global exit_probe_overhead = 645
global entry_probe_overhead = 882
global handlexit_probe_overhead = 405
global overhead

probe begin (1)
{
    overhead = entry_probe_overhead + exit_probe_overhead + handlexit_probe_overhead
    exit_names[0] = "EXCEPTION"
    exit_names[1] = "EXTERNAL INT"
    exit_names[7] = "PENDING INTERRUPT"
    exit_names[10] = "CPUID"
    exit_names[18] = "HYPERCALL"
    exit_names[28] = "CR ACCESS"
    exit_names[29] = "DR ACCESS"
    exit_names[30] = "IO INSTRUCTION (LIGHT)"
    exit_names[31] = "MSR READ"
    exit_names[32] = "MSR WRITE"
    exit_names[44] = "APIC ACCESS"
    exit_names[60] = "IO INSTRUCTION (HEAVY)"
}

probe module("kvm_intel").statement("kvm_handle_exit@arch/x86/kvm/vmx.c:3011")
{
    vmx_exit_reason = $exit_reason
}

// exit: timestamp taken right after the guest exits, inside vmx_vcpu_run
// (adjust the line number to your tree)
probe module("kvm_intel").statement("vmx_vcpu_run@arch/x86/kvm/vmx.c:3226")
{
    vmexit = get_cycles()
}

// entry: vmx_vcpu_run is entered again for the next guest entry;
// the exit-to-entry delta (minus probe overhead) is the handling latency
probe module("kvm_intel").function("vmx_vcpu_run")
{
    vmentry = get_cycles()
    if (vmx_exit_reason != 12)    // reason 12 is HLT; skip it, its latency is idle time
        latency[vmx_exit_reason] <<< vmentry - vmexit - overhead
}

// heavy-exit: exits that bounce out to userspace re-enter through
// kvm_arch_vcpu_ioctl_run; classify them as heavy
probe module("kvm").function("kvm_arch_vcpu_ioctl_run")
{
    vmx_exit_reason = 60
}

probe end {
    foreach (x in latency-) {
        printf("%d: %s\n", x, exit_names[x])
        printf("avg %d = sum %d / count %d\n",
               @avg(latency[x]), @sum(latency[x]), @count(latency[x]))
        printf("min %d max %d\n", @min(latency[x]), @max(latency[x]))
        print(@hist_linear(latency[x], 10000000, 50000000, 10000000))
    }

    print("\n")
}
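
To run it (a sketch: stap needs debuginfo for the running kvmtrace
kernel, and the line numbers in the statement probes must match your
tree):

stap kvm.stp
# ... run the benchmark phase, then Ctrl-C to print the per-exit stats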



