Thanks Daniel for the quick response!

OK, I will post to Red Hat support to see if I can get some help.

Cheers,
Guangya

-----Original Message-----
From: Daniel P. Berrange [mailto:berrange@xxxxxxxxxx]
Sent: Friday, May 20, 2011 10:15 PM
To: Guangya Liu
Cc: libvir-list@xxxxxxxxxx
Subject: Re: libvirt 0.8.2 issues

On Fri, May 20, 2011 at 01:25:46PM +0800, Guangya Liu wrote:
> Hi libvirt support,
>
> Found an interesting problem for RHEL: libvirt 0.8.2 is shipped with the
> RHEL5u6 distribution, while libvirt 0.8.1 is shipped with RHEL6. Do you
> know why the libvirt version was downgraded when moving to RHEL6?

That's just an artifact of the way the release timelines of RHEL5 vs
RHEL6 happened to line up. RHEL-6.1 updates to libvirt 0.8.7, so it is
once again ahead of RHEL5.

> In our production instance each compute node has 24 GB of memory and
> each VM has 2 GB of memory (8 VMs x 2 GB), so there should be enough
> space for them.
>
> [root@lxbst0501 ~]# free
>              total       used       free     shared    buffers     cached
> Mem:      24676304    8191072   16485232          0     305408    5255460
> -/+ buffers/cache:    2630204   22046100
> Swap:      4192956          0    4192956

Ok, so 16 GB free currently.

> [root@lxbst0501 ~]# virsh start vmbst050107
> error: Failed to start domain vmbst050107
> error: internal error Process exited while reading console log output:
> Could not allocate physical memory

Ok, this is an error coming from KVM itself, so it doesn't look like
libvirt's fault so far.

> [root@lxbst0501 ~]# LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin
> /usr/bin/kvm -S -M rhel5.4.0 -m 2048 -smp 1,sockets=1,cores=1,threads=1
> -name vmbst050100 -uuid b7c85c96-511d-4a46-8de0-37b9edaf4ce9 -nographic
> -monitor unix:/var/lib/libvirt/qemu/vmbst050100.monitor,server,nowait
> -no-kvm-pit-reinjection -boot c
> -drive file=/dev/xen_vg/vmbst050100_1.img,if=virtio,boot=on,format=raw,cache=none
> -drive file=/afs/cern.ch/project/isf/contextualization/isoimages/glExecWN_slc5_x86_64_kvm.iso,if=virtio,media=cdrom,readonly=on,format=raw
> -drive file=/dev/xen_vg/afscache-00-16-3e-00-49-ba,if=virtio,format=raw,cache=none
> -drive file=/dev/xen_vg/pool-00-16-3e-00-49-ba,if=virtio,format=raw,cache=none
> -drive file=/dev/xen_vg/cvmfs-00-16-3e-00-49-ba,if=virtio,format=raw,cache=none
> -drive file=/VMOxen/storage/local/images/vmbst050100.iso,if=ide,media=cdrom,bus=1,unit=0,readonly=on,format=raw
> -net nic,macaddr=00:16:3e:00:49:ba,vlan=0,model=virtio -net tap,fd=17,vlan=0
> -serial pty -parallel none -usb -balloon virtio
> TUNGETIFF ioctl() failed: Bad file descriptor
> TUNSETSNDBUF ioctl failed: Bad file descriptor
> Could not allocate physical memory

Again this suggests a KVM bug.

> ioctl(3, 0xc004ae02, 0x7fff6d595b10) = -1 E2BIG (Argument list too long)

Not sure what this ioctl is. I wonder if it could be a cause of trouble,
or completely harmless.
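For what it's worth, the request number can be compared against the KVM
ioctl definitions directly. A minimal sketch, assuming an x86_64 host with
the kernel headers (linux/kvm.h) installed; the file name is just for
illustration:

/* decode_kvm_ioctl.c - print a few KVM ioctl request numbers so they can
 * be compared with the 0xc004ae02 seen in the strace output above.
 * Build: gcc -o decode_kvm_ioctl decode_kvm_ioctl.c */
#include <stdio.h>
#include <linux/kvm.h>

int main(void)
{
    printf("KVM_GET_API_VERSION    = 0x%08lx\n", (unsigned long)KVM_GET_API_VERSION);
    printf("KVM_CREATE_VM          = 0x%08lx\n", (unsigned long)KVM_CREATE_VM);
    printf("KVM_GET_MSR_INDEX_LIST = 0x%08lx\n", (unsigned long)KVM_GET_MSR_INDEX_LIST);
    printf("KVM_CHECK_EXTENSION    = 0x%08lx\n", (unsigned long)KVM_CHECK_EXTENSION);
    return 0;
}

If 0xc004ae02 turns out to be KVM_GET_MSR_INDEX_LIST, then the E2BIG is
the documented "buffer too small, retry with a bigger one" return, and the
second call below (which returns 0) is exactly that retry, so it would be
harmless.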
> ioctl(3, 0xc004ae02, 0x196ca0e0) = 0
> ioctl(3, 0xae03, 0x27) = 1
> ioctl(3, 0xae03, 0x18) = 1
> ioctl(6, 0xae71, 0x7fff6d595ca0) = 0
> ioctl(3, 0xae03, 0x15) = 1
> ioctl(3, 0xae03, 0x19) = 1024
> ioctl(6, 0x4008ae6a, 0x196ca120) = 0
> mmap(NULL, 2168475648, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
> brk(0x9aaef000) = 0x196d9000
> mmap(NULL, 2168606720, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
> mmap(NULL, 2168475648, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
> brk(0x9aaef000) = 0x196d9000
> mmap(NULL, 2168606720, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
> ioctl(3, 0xae03, 0x10) = 1
> write(2, "Could not allocate physical memo"..., 35Could not allocate

These ones are obviously the ultimate failure. I honestly don't know why
the kernel would return ENOMEM when you have 16 GB free and only ask for
2 GB. The only thing I can think of is that, if the box has NUMA, there is
perhaps some pathological problem causing one node to run out of memory.

IIRC there was also some bug in KVM where it would fail with a couple of
specific memory sizes. I don't think 2048 MB was one of them, but you
could try changing your guest to 3 GB or 1.5 GB of RAM etc. to see if it
makes any difference.

Failing that, I think you had best file a support ticket with Red Hat
customer services about the KVM problem.

Regards,
Daniel
-- 
|: http://berrange.com       -o-   http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org            -o-        http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-          http://live.gnome.org/gtk-vnc :|

--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list
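A small follow-up sketch for the NUMA theory above: print MemFree for each
NUMA node, to see whether a single node is exhausted even though the box
as a whole has ~16 GB free. It assumes the standard sysfs layout
(/sys/devices/system/node/node*/meminfo); `numactl --hardware`, where
installed, reports the same per-node free figures.

/* node_free.c - print the MemFree line of every NUMA node's meminfo.
 * Build: gcc -o node_free node_free.c */
#include <stdio.h>
#include <string.h>
#include <glob.h>

int main(void)
{
    glob_t g;
    size_t i;

    if (glob("/sys/devices/system/node/node*/meminfo", 0, NULL, &g) != 0) {
        fprintf(stderr, "no per-node meminfo found (non-NUMA kernel?)\n");
        return 1;
    }
    for (i = 0; i < g.gl_pathc; i++) {
        FILE *fp = fopen(g.gl_pathv[i], "r");
        char line[256];

        if (!fp)
            continue;
        while (fgets(line, sizeof(line), fp)) {
            /* e.g. "Node 0 MemFree:   123456 kB" */
            if (strstr(line, "MemFree"))
                fputs(line, stdout);
        }
        fclose(fp);
    }
    globfree(&g);
    return 0;
}

For the memory-size experiment (3 GB or 1.5 GB instead of 2 GB), editing
the <memory> element of the guest XML (value in KiB) via `virsh edit` and
retrying the start is probably the quickest route.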