On Mon, 28 Mar 2011, Pasi Kärkkäinen wrote:
> On Sun, Mar 27, 2011 at 09:41:04AM -0400, Steve Thompson wrote:
>> First. With Xen I was never able to start more than 30 guests at one time
>> with any success; the 31st guest always failed to boot or crashed during
>> booting, no matter which guest I chose as the 31st. With KVM I chose to
>> add more guests to see if it could be done, with the result that I now
>> have 36 guests running simultaneously.
>
> Hmm.. I think I've seen that earlier. I *think* it was some trivial
> thing to fix, like increasing number of available loop devices or so.

I tried that, and other things, but was never able to make it work. I was
using max_loop=64 in the end, but even with a larger number I couldn't
start more than 30 guests. Number 31 would fail to boot, but would boot
successfully if I shut down, say, #17; then #17 would fail to boot, and so
on.
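
For what it's worth, raising the limit itself is simple enough; this is
roughly what I had in place (a sketch from memory, so double-check the
file names on your own box):

# Raise the loop device limit on el5; takes effect once the loop
# module is reloaded (or after a reboot).
echo "options loop max_loop=64" >> /etc/modprobe.conf

# If loop is built into the kernel, append max_loop=64 to the kernel
# line in /boot/grub/grub.conf instead.

# Sanity check: how many loop devices exist now?
ls /dev/loop* | wc -l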

> Hmm.. Windows 7 might be too new for Xen 3.1 in el5, so for win7
> upgrading to xen 3.4 or 4.x helps. (gitco.de has newer xen rpms for el5
> if you're ok with thirdparty rpms).

Point taken; I realize this.

>> Third. I was never able to successfully complete a PXE-based installation
>> under Xen. No problems with KVM.
>
> That's weird. I do that often. What was the problem?

I use the DHCP server (on the host) to supply all address and name
information, and this works without any issues. In the PXE case, I was
never able to get the guest to communicate with the server for long enough
to fully load pxelinux.0, in spite of the bridge setup. I have no idea
why; it's not exactly rocket science either.
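
For reference, the PXE side of my dhcpd.conf was nothing exotic, just the
usual next-server/filename pair per host; something along these lines (the
MAC, addresses and host name are invented for illustration):

# Per-guest PXE entry on the el5 host; the MAC, addresses and host
# name below are made up for the example.
cat >> /etc/dhcpd.conf <<'EOF'
host guest31 {
    hardware ethernet 00:16:3e:00:00:31;
    fixed-address 192.168.1.131;
    next-server 192.168.1.1;      # TFTP server on the host
    filename "pxelinux.0";
}
EOF
service dhcpd restart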

> Can you post more info about the benchmark? How many vcpus did the VMs have?
> How much memory? Were the VMs 32b or 64b ?

The benchmark is just a "make" of a large package of my own
implementation. A top-level makefile drives a series of makes of a set of
sub-packages, 33 of them. It is a compilation of about 1100 C and C++
source files, including generation of dependencies and binaries, and
running a set of perl scripts (some of which generate some of the C
source). All of the sources and target directories were on NFS volumes; only
the local O/S disks were virtualized. I used 1 vcpu per guest and either
512MB or 1GB of memory. The results I showed were for 64-bit guests with
512MB memory, but they were qualitatively the same for 32-bit guests.
Increasing memory from 512MB to 1GB made no significant difference to the
timings. Some areas of the build are serial by nature, but the 14:38 result
for KVM w/virtio dropped to 9:52 with vcpu=2 and make -j2.
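
(The -j2 number came from simply timing the top-level make inside the
guest; schematically, with the directory name made up:)

# Timed inside the guest; the source directory name is a placeholder,
# and a "clean" target is assumed so the two runs are comparable.
cd /nfs/src/mybigpackage
make clean >/dev/null
time make          # ~14:38 wall clock: 1 vcpu, KVM w/virtio
make clean >/dev/null
time make -j2      # ~9:52 with vcpu=2
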
The 64-bit HVM guests w/o PV were quite a bit faster than the 32-bit HVM
guests, as expected. I also had some Fedora diskless guests (no PV) using
an NFS root, in which situation the 32-bit guests were faster than the
64-bit guests (and both were faster than the HVM guests w/o PV). These
used kernels that I built myself.
I did not compare Xen vs KVM with vcpu > 1.

> Did you try Xen HVM with PV drivers?

Yes, but I don't have the exact timings to hand anymore. They were faster
than the non-PV case but still slower than KVM w/virtio.
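
(For the record, "w/virtio" just means the guest I/O, the disk and
typically the NIC as well, goes through the paravirtual virtio drivers
rather than emulated hardware; at the qemu-kvm level that amounts to flags
of this general shape, with the image path and ifup script as placeholders:)

# Virtio disk and NIC at the qemu-kvm level; the values shown are
# placeholders, not my exact command line.
qemu-kvm -m 512 -smp 1 \
    -drive file=/var/lib/libvirt/images/guest31.img,if=virtio \
    -net nic,model=virtio -net tap,script=/etc/qemu-ifup \
    -vnc :31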

>> Fifth: I love being able to run top/iostat/etc on the host and see just
>> what the hardware is really up to, and to be able to overcommit memory.
>
> "xm top" and iostat in dom0 works well for me :)

I personally don't care much for "xm top", and it doesn't help anyway if
you're not running as root or don't have sudo access, or if you'd like to
read performance info for the whole shebang via /proc (as I do).
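
Everything interesting is readable by any unprivileged user straight from
/proc; for instance (the disk names are just examples):

# Host-wide numbers, no root or sudo required.
cat /proc/loadavg                                 # load averages and run queue
grep '^cpu ' /proc/stat                           # aggregate CPU jiffies (user/nice/sys/idle/iowait/...)
egrep 'MemTotal|MemFree|SwapFree' /proc/meminfo   # memory headroom
egrep ' (sda|vda) ' /proc/diskstats               # per-disk I/O counters; adjust names
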
Steve
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos