Re: Large sized guest taking for ever to boot...

On 6/8/2012 11:08 AM, Jan Kiszka wrote:
[CC'ing qemu as this discusses its code base]

On 2012-06-08 19:57, Chegu Vinod wrote:
On 6/8/2012 10:42 AM, Alex Williamson wrote:
On Fri, 2012-06-08 at 10:10 -0700, Chegu Vinod wrote:
On 6/8/2012 9:46 AM, Alex Williamson wrote:
On Fri, 2012-06-08 at 16:29 +0000, Chegu Vinod wrote:
Hello,

I picked up a recent version of qemu (1.0.92 with some fixes)
and tried it
on an x86_64 server (with the host and the guest running a 3.4.1 kernel).
BTW, I observe the same thing if I use the 1.1.50 version of
qemu... not sure if this is really
related to qemu...

While trying to boot a large guest (80 vcpus + 512GB) I observed
that the guest
took forever to boot up... ~1 hr or even more. [This wasn't the
case when I
was using RHEL 6.x related bits]
Was either case using device assignment?  Device assignment will map and
pin each page of guest memory before startup, which can be a noticeable
pause on smallish (<16GB) guests.  That should scale linearly, though,
and if you're using qemu rather than qemu-kvm, it's not related.  Thanks,
I don't have any device assignment at this point. Yes, I am using qemu
(not qemu-kvm)...
Just to be safe, are you using --enable-kvm with qemu?
Yes...
Unless you are using current qemu.git master (where it is enabled by
default), --enable-kvm does not activate the in-kernel irqchip support
for you. Not sure if that can make such a huge difference, but it is a
variation from qemu-kvm. You can enable it in qemu-1.1 with -machine
kernel_irqchip=on.

Thanks for pointing this out... I will add that.
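For reference, requesting the in-kernel irqchip explicitly on qemu 1.1 looks like this; a minimal sketch only (the binary path and guest sizing are copied from the command line later in this thread, the rest of the options are elided):

```shell
# Sketch: explicitly enabling the in-kernel irqchip on qemu 1.1.
# On current qemu.git master this is the default; with -enable-kvm
# alone on 1.1 it is not.
/usr/local/bin/qemu-system-x86_64 -enable-kvm \
    -machine kernel_irqchip=on \
    -m 524288 -smp 80,sockets=80,cores=1,threads=1 \
    -name vm1
```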

I was using qemu.git, not the master


-----

/etc/qemu-ifup tap0

/usr/local/bin/qemu-system-x86_64 -enable-kvm \
-cpu Westmere,+rdtscp,+pdpe1gb,+dca,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pclmuldq,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme \
-m 524288 -smp 80,sockets=80,cores=1,threads=1 \
-name vm1 \
-chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/vm1.monitor,server,nowait \
-drive file=/dev/libvirt_lvm/vm.img,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native \
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
-monitor stdio \
-net nic,macaddr=52:54:00:71:01:01 \
-net tap,ifname=tap0,script=no,downscript=no \
-vnc :4

/etc/qemu-ifdown tap0

----

The issue seems very basic... I was earlier running RHEL 6.3 RC1 on the
host and the guest, and both seemed to boot fine.
Note that RHEL is based on qemu-kvm.  Thanks,
Yep, knew that :)

I was using upstream qemu-kvm and was encouraged to move away from
it...to qemu.
And that is good. :)

Is the problem present in current qemu-kvm.git? If yes, can you bisect
when it was introduced?
Shall try out qemu-kvm.git (after finishing some experiments...).
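The bisect Jan suggests would be driven by `git bisect run` with a script that marks each revision good or bad. A sketch of the mechanics below: the throwaway repo and the "val >= 5" condition are stand-ins for checking out qemu-kvm.git, building each revision, and timing the large-guest boot.

```shell
# Demonstrates the git-bisect mechanics with a synthetic history of 8
# commits, where the "regression" lands at commit 5.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo && cd repo
git config user.email bisect@example.com
git config user.name bisect

for i in 1 2 3 4 5 6 7 8; do
    echo "$i" > val
    git add val
    git commit -qm "commit $i"
done

# HEAD is "bad" (slow boot reproduces), HEAD~7 is "good" (booted fine)
git bisect start HEAD HEAD~7 > /dev/null

# The run script: exit 0 = good, non-zero = bad. A real run would
# configure/build qemu and time the guest boot here instead.
git bisect run sh -c 'test "$(cat val)" -lt 5' > /dev/null

# The log now records which commit introduced the regression.
git bisect log | tail -n 1
```

With a real build-and-boot check in place of the `test` command, the same loop narrows ~100 commits to the offender in about 7 steps.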

BTW, another data point... if I try to boot the RHEL 6.3 kernel in the guest (with the latest qemu.git and 3.4.1 on the host) it boots just fine...

So it's something to do with the 3.4.1 kernel in the guest and the existing udev... Need to dig
deeper.

Vinod

Jan


--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

