Re: virt-install into rbd hangs during Anaconda package installation


 



Weird. The VMs that were hung in interruptible wait state have now
disappeared. No idea why.

Additional information:

ceph-mds-10.2.3-0.el7.x86_64
python-cephfs-10.2.3-0.el7.x86_64
ceph-osd-10.2.3-0.el7.x86_64
ceph-radosgw-10.2.3-0.el7.x86_64
libcephfs1-10.2.3-0.el7.x86_64
ceph-common-10.2.3-0.el7.x86_64
ceph-base-10.2.3-0.el7.x86_64
ceph-10.2.3-0.el7.x86_64
ceph-selinux-10.2.3-0.el7.x86_64
ceph-mon-10.2.3-0.el7.x86_64

    cluster b2b00aae-f00d-41b4-a29b-58859aa41375
     health HEALTH_OK
     monmap e11: 3 mons at {ceph01=10.0.5.2:6789/0,ceph03=10.0.5.4:6789/0,ceph07=10.0.5.13:6789/0}
            election epoch 76, quorum 0,1,2 ceph01,ceph03,ceph07
     osdmap e14396: 70 osds: 66 up, 66 in
            flags sortbitwise,require_jewel_osds
      pgmap v7116569: 1664 pgs, 3 pools, 7876 GB data, 1969 kobjects
            23648 GB used, 24310 GB / 47958 GB avail
                1661 active+clean
                   2 active+clean+scrubbing+deep
                   1 active+clean+scrubbing
  client io 839 kB/s wr, 0 op/s rd, 159 op/s wr


On Mon, Feb 06, 2017 at 06:57:23PM PST, Tracy Reed spake thusly:
> This is what I'm doing on my CentOS 7/KVM/libvirt server:
> 
> rbd create --size 20G pool/vm.mydomain.com
> 
> rbd map pool/vm.mydomain.com --name client.admin
> 
> virt-install --name vm.mydomain.com --ram 2048 --disk path=/dev/rbd/pool/vm.mydomain.com  --vcpus 1  --os-type linux --os-variant rhel6 --network bridge=dmz --graphics none --console pty,target_type=serial --location http://repo.mydomain.com/centos/7/os/x86_64 --extra-args "ip=en0:dhcp ks=http://repo.mydomain.com/ks/ks.cfg.vm console=ttyS0  ksdevice=eth0 inst.repo=http://10.0.10.5/http://repo.mydomain.com/centos/7/os/x86_64";
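One sanity check worth adding before virt-install touches the device:
capture the device node that rbd map prints and confirm the mapping
actually exists. A sketch using the same image name as above (hosts
without the ceph CLI just skip it):

```shell
# Sketch: map the image and verify the mapping before handing the
# device to virt-install. Image name matches the commands above; the
# guard lets the script run on hosts without the ceph CLI installed.
img=pool/vm.mydomain.com

if command -v rbd >/dev/null 2>&1; then
    # rbd map prints the device node it created (e.g. /dev/rbd0),
    # which avoids guessing the /dev/rbd/<pool>/<image> symlink.
    dev=$(rbd map "$img" --name client.admin 2>/dev/null) || true
    rbd showmapped                      # list all kernel rbd mappings
    if [ -b "$dev" ]; then
        echo "mapped at $dev"
    else
        echo "mapping failed for $img" >&2
    fi
else
    echo "rbd CLI not installed; skipping"
fi
```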
> 
> Anaconda then creates the partitions and xfs filesystems and starts
> installing packages. Nine times out of ten it hangs partway through
> package installation, and I have no idea why. I can't kill
> the VM. 
> 
> Trying to destroy it shows:
> 
> virsh # destroy vm.mydomain.com
> error: Failed to destroy domain vm.mydomain.com
> error: Failed to terminate process 19629 with SIGKILL:
> Device or resource busy
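A process that shrugs off SIGKILL like that is almost always in
uninterruptible sleep ("D" in the STAT column), blocked on kernel I/O.
A quick way to confirm, substituting the PID from the error above (the
sketch uses its own shell PID only so it runs as-is):

```shell
# Replace with the PID that refused SIGKILL (19629 in the error
# above); $$ is used here only so the example runs as written.
pid=$$

# STAT "D" means uninterruptible sleep: the process is blocked inside
# the kernel, typically on I/O, which is why SIGKILL has no effect.
ps -o pid,stat,wchan:32,cmd -p "$pid"

# The kernel stack (needs root, and may be empty on some kernels)
# shows where in the rbd/block layer the process is blocked.
cat /proc/"$pid"/stack 2>/dev/null || true
```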
> 
> and then virsh list shows:
> 
> 127   vm.mydomain.com        in shutdown
> 
> The log for this vm in
> /var/log/libvirt/qemu/vm.mydomain.com contains only:
> 
> 2017-02-06 08:14:12.256+0000: starting up libvirt version:
> 2.0.0, package: 10.el7_3.2 (CentOS BuildSystem
> <http://bugs.centos.org>, 2016-12-06-19:53:38,
> c1bm.rdu2.centos.org), qemu version: 1.5.3
> (qemu-kvm-1.5.3-105.el7_2.7), hostname: cpu01.mydomain.com
> LC_ALL=C
> PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name
> secclass2.mydomain.com -S -machine
> pc-i440fx-rhel7.0.0,accel=kvm,usb=off -cpu
> SandyBridge,+vme,+f16c,+rdrand,+fsgsbase,+smep,+erms -m
> 2048 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1
> -uuid 5dadf01e-b996-411f-b95f-26ce6b790bae -nographic
> -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-127-secclass2.mydomain./monitor.sock,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=utc,driftfix=slew -global
> kvm-pit.lost_tick_policy=discard -no-hpet -no-reboot
> -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1
> -boot strict=on -kernel
> /var/lib/libvirt/boot/virtinst-vmlinuz.9Ax4zt -initrd
> /var/lib/libvirt/boot/virtinst-initrd.img.ALJE43 -append
> 'ip=en0:dhcp ks=http://util1.mydomain.com/ks/ks.cfg.vm.
> console=ttyS0  ksdevice=eth0
> inst.repo=http://10.0.10.5/http://util1.mydomain.com/centos/7/os/x86_64
> method=http://util1.mydomain.com/centos/7/os/x86_64'
> -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x5.0x7
> -device
> ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x5
> -device
> ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x5.0x1
> -device
> ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x5.0x2
> -device
> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4
> -drive
> file=/dev/rbd/security-class/secclass2.mydomain.com,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -netdev tap,fd=55,id=hostnet0,vhost=on,vhostfd=57 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:87:d2:12,bus=pci.0,addr=0x3
> -chardev pty,id=charserial0 -device
> isa-serial,chardev=charserial0,id=serial0 -chardev
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-127-secclass2.mydomain./org.qemu.guest_agent.0,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0
> -device usb-tablet,id=input0,bus=usb.0,port=1 -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg
> timestamp=on
> char device redirected to /dev/pts/24 (label charserial0)
> qemu: terminating on signal 15 from pid 23385
> 
> Any ideas? If this is a libvirt/KVM problem I'll take it to the
> appropriate forum, but we can install onto iSCSI LUNs with no problem
> at all.
> 
> Someone on IRC mentioned that mkfs's discard pass zeroes out the rbd
> image, which can take a long time, but that should happen in the
> background and not hang the whole VM forever, right?
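If you want to test the discard theory outside Anaconda, one approach
is to time mkfs.xfs with and without its initial discard pass; -K
tells mkfs.xfs to skip discards at mkfs time. A sketch, assuming the
image is still mapped at the path used in the commands above:

```shell
# Hypothetical device path matching the rbd map commands above.
dev=/dev/rbd/pool/vm.mydomain.com

if [ -b "$dev" ]; then
    # -K skips the discard pass; mkfs.xfs discards the whole
    # device by default.
    time mkfs.xfs -f -K "$dev"
    # Default behaviour, including the discard pass, for comparison.
    time mkfs.xfs -f "$dev"
else
    echo "rbd device $dev is not mapped; run 'rbd map' first"
fi
```

A large gap between the two timings would point at discards against
the rbd image being the slow step.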
> 
> Thanks for any insight you can provide!
> 
> -- 
> Tracy Reed





-- 
Tracy Reed


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
