Re: KVM performance vs. Xen

Avi Kivity wrote:
Andrew Theurer wrote:
Avi Kivity wrote:


What's the typical I/O load (disk and network bandwidth) while the tests are running?
This is the average throughput:
network:    Tx: 79 MB/sec  Rx: 5 MB/sec

MB as in Byte or Mb as in bit?
Byte. There are 4 x 1 Gb adapters, each handling about 20 MB/sec or 160 Mbit/sec.

disk:    read: 17 MB/sec  write: 40 MB/sec

This could definitely cause the extra load, especially if it's many small requests (compared to a few large ones).
I don't have the request sizes at my fingertips, but we have to use a lot of disks to support this I/O, so I think it's safe to assume there are a lot more requests than a simple large sequential read/write.
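Back-of-the-envelope, assuming small requests: the combined 57 MB/sec of disk I/O at 4 KB per request would be roughly 14,000 requests/sec, versus only ~450/sec at 128 KB per request, so per-request overhead in the I/O path could easily add up to the extra CPU time.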

The host hardware:
A 2-socket, 8-core Nehalem with SMT and EPT enabled, lots of disks, and 4 x
1 Gb Ethernet

CPU time measurements with SMT can vary wildly if the system is not fully loaded. If the scheduler happens to put two threads on a single core, each of them gets less work done than if they were scheduled on different cores.
Understood. Even if, at low loads, the scheduler does the right thing and spreads out across all the cores first, once the system goes beyond 50% utilization it starts scheduling two threads per core, and each thread can do less work, so CPU utilization climbs at a much higher rate than the linear increase in work. I have always wanted something that could more accurately show the utilization of a processor core, but I guess we have to use what we have today. I will run again with SMT off to see what we get.

On the other hand, without SMT you will get to overcommit much faster, so you'll have scheduling artifacts. Unfortunately there's no good answer here (except to improve the SMT scheduler).
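One way to do the SMT-off run without a trip to the BIOS is to offline one sibling per core through sysfs. A rough sketch in C (it assumes every CPU starts online, and that topology/thread_siblings_list reads as either "a,b" or "a-b"):

/* offline_smt.c -- offline the second SMT sibling of each core via sysfs.
 * Sketch only: assumes all CPUs start online.
 * Build: gcc -o offline_smt offline_smt.c   (run as root)
 * Undo by writing 1 back to each cpuN/online file. */
#include <stdio.h>

int main(void)
{
    char path[128], buf[64];

    for (int cpu = 0; cpu < 4096; cpu++) {
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list",
                 cpu);
        FILE *f = fopen(path, "r");
        if (!f)
            break;                              /* ran past the last CPU */
        if (!fgets(buf, sizeof(buf), f)) {
            fclose(f);
            continue;
        }
        fclose(f);

        int first, second;
        /* siblings show up as "0,8" (comma) or "0-1" (range) */
        if (sscanf(buf, "%d,%d", &first, &second) == 2 ||
            sscanf(buf, "%d-%d", &first, &second) == 2) {
            if (cpu == second) {                /* keep first, drop second */
                snprintf(path, sizeof(path),
                         "/sys/devices/system/cpu/cpu%d/online", cpu);
                FILE *o = fopen(path, "w");
                if (o) {
                    fputs("0", o);
                    fclose(o);
                    printf("offlined cpu%d (sibling of cpu%d)\n", cpu, first);
                }
            }
        }
    }
    return 0;
}

Disabling HT in the BIOS is cleaner, but the sysfs route makes quick A/B runs easier.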

Yes, it is. If there is a lot of I/O, this might be due to the thread pool used for I/O.
I have an older patch that makes a small change to posix_aio_thread.c, trying to keep the thread pool size a bit lower than it is today. I will dust that off and see if it helps.
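For context, the pool-growth logic in question boils down to spawning a new worker only when no thread is idle and the pool is still under a hard cap, so lowering the cap is essentially a one-liner. A self-contained sketch of that pattern (illustrative names and numbers, not the actual qemu code):

/* pool_cap.c -- the spawn-on-demand thread pool pattern, boiled down.
 * MAX_THREADS is the knob such a patch would lower.
 * Build: gcc -pthread -o pool_cap pool_cap.c */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define MAX_THREADS 8                   /* the cap */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int cur_threads, idle_threads, pending;

static void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    for (;;) {
        while (pending == 0) {
            idle_threads++;
            pthread_cond_wait(&cond, &lock);
            idle_threads--;
        }
        pending--;
        pthread_mutex_unlock(&lock);
        usleep(1000);                   /* stand-in for one I/O request */
        pthread_mutex_lock(&lock);
    }
    return NULL;
}

static void submit(void)
{
    pthread_mutex_lock(&lock);
    pending++;
    /* the key decision: grow the pool only when nobody is idle
     * and we are still under the cap */
    if (idle_threads == 0 && cur_threads < MAX_THREADS) {
        pthread_t tid;
        if (pthread_create(&tid, NULL, worker, NULL) == 0) {
            pthread_detach(tid);
            cur_threads++;
        }
    }
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
}

int main(void)
{
    for (int i = 0; i < 100; i++)
        submit();                       /* a burst of requests */
    sleep(1);
    printf("pool grew to %d thread(s)\n", cur_threads);
    return 0;
}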

Really, I think linux-aio support can help here.
Yes, I think that would work for real block devices, but would it help for files? I am using real block devices right now, but it would be nice to also see a benefit for files in a filesystem. Or maybe I am misunderstanding this, and linux-aio can be used on files?
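For what it's worth, linux-aio (io_setup/io_submit) does accept regular files as well as block devices, but it is only truly asynchronous when the fd is opened with O_DIRECT; buffered file I/O can still block in the submit path. A minimal sketch using libaio (the path is a placeholder; link with -laio):

/* aio_direct_read.c -- one asynchronous read via linux-aio.
 * /tmp/testfile is a placeholder; any file or block device works,
 * provided it is opened O_DIRECT and is at least 4 KB long.
 * Build: gcc -o aio_direct_read aio_direct_read.c -laio */
#define _GNU_SOURCE                     /* for O_DIRECT */
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/testfile", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* O_DIRECT wants sector-aligned buffer, length and offset */
    void *buf;
    if (posix_memalign(&buf, 512, 4096)) return 1;

    io_context_t ctx = 0;
    if (io_setup(16, &ctx) < 0) { fprintf(stderr, "io_setup failed\n"); return 1; }

    struct iocb cb, *cbs[1] = { &cb };
    io_prep_pread(&cb, fd, buf, 4096, 0);       /* 4 KB at offset 0 */

    if (io_submit(ctx, 1, cbs) != 1) { fprintf(stderr, "io_submit failed\n"); return 1; }

    struct io_event ev;
    if (io_getevents(ctx, 1, 1, &ev, NULL) != 1) { fprintf(stderr, "io_getevents failed\n"); return 1; }
    printf("read completed, res=%ld\n", (long)ev.res);

    io_destroy(ctx);
    free(buf);
    close(fd);
    return 0;
}

If I have it right, that also means the linux-aio path only buys anything with O_DIRECT (cache=off), so buffered files would still go through the thread pool either way.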

-Andrew



Yes, there is a scheduler tracer, though I have no idea how to operate it.

Do you have kvm_stat logs?
Sorry, I don't, but I'll run that next time. BTW, I did not notice a batch/log mode the last time I ran kvm_stat, or maybe it was not obvious to me. Is there an ideal way to run kvm_stat without curses-like output?

You're probably using an ancient version:

$ kvm_stat --help
Usage: kvm_stat [options]

Options:
 -h, --help            show this help message and exit
 -1, --once, --batch   run in batch mode for one second
 -l, --log             run in logging mode (like vmstat)
 -f FIELDS, --fields=FIELDS
                       fields to display (regex)




