qemu-kvm VMs hang for a long time at start or reboot when using rbd-mapped images

Hi All,
Recently I ran into a problem that I have not been able to explain.

The environment and steps were as follows (a condensed command sketch follows the list):
ceph 10.2.5 (jewel), qemu 2.5.0, CentOS 7.2 x86_64
create a pool rbd_vms with 3 replicas, fronted by a cache tier pool, also with 3 replicas
create 100 images in rbd_vms
rbd map the 100 images to local devices: /dev/rbd0 ... /dev/rbd100
dd if=/root/win7.qcow2 of=/dev/rbd0 bs=1M count=3000
virsh define 100 VMs (vm0 ... vm100), one /dev/rbd device per VM
virsh start the 100 VMs
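
Here is that sketch; the cache pool name, pg counts, and image size are
assumptions filled in for illustration, not the exact values I used:

  # data pool plus a writeback cache tier in front of it
  # (cache-tier sizing / hit_set settings omitted here)
  ceph osd pool create rbd_vms 512 512
  ceph osd pool set rbd_vms size 3
  ceph osd pool create rbd_vms_cache 128 128
  ceph osd pool set rbd_vms_cache size 3
  ceph osd tier add rbd_vms rbd_vms_cache
  ceph osd tier cache-mode rbd_vms_cache writeback
  ceph osd tier set-overlay rbd_vms rbd_vms_cache

  # create and map the images, then seed from the qcow2 file
  for i in $(seq 0 99); do
      rbd create rbd_vms/vm$i --size 40960   # 40 GB; size is an assumption
      rbd map rbd_vms/vm$i                   # shows up as /dev/rbd$i
  done
  dd if=/root/win7.qcow2 of=/dev/rbd0 bs=1M count=3000

  # define and start the VMs
  for i in $(seq 0 99); do
      virsh define vm$i.xml && virsh start vm$i
  done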

When the 100 VMs start concurrently, some of them hang.
Running fio inside the VMs also causes some of them to hang.
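
The fio jobs were ordinary ones along these lines (the parameters below
are illustrative, not the exact job I ran):

  fio --name=randwrite --filename=/root/fio.test --size=1G \
      --direct=1 --rw=randwrite --bs=4k --iodepth=32 \
      --numjobs=4 --runtime=60 --time_based --group_reporting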

I checked ceph status, OSD status, the logs, etc.; everything looked the same as before the hangs.
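
Concretely, the cluster-side checks below all came back clean (these are
the standard jewel commands; maybe I am missing the right one):

  # cluster and OSD health; watch for slow/blocked request warnings
  ceph -s
  ceph health detail
  ceph osd stat

  # ops currently in flight on an OSD, via its admin socket
  # (run on the OSD host)
  ceph daemon osd.0 dump_ops_in_flight
  ceph daemon osd.0 dump_historic_ops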

But checking the devices with iostat -dx 1, some rbd* devices look strange:
%util is pinned at 100% while the read and write counts are both zero.
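
For anyone wanting to reproduce the observation, this is how I am looking
at the devices; the osdc check is my assumption about the right place to
see the kernel client's outstanding requests:

  # per-device stats; the stuck devices show %util 100 with r/s = w/s = 0
  iostat -dx 1

  # which image is behind which /dev/rbd device
  rbd showmapped

  # in-flight OSD requests of the kernel client (needs debugfs mounted);
  # a request that never leaves this list would mean the op is stuck in
  # the cluster, not in qemu
  mount -t debugfs none /sys/kernel/debug 2>/dev/null
  cat /sys/kernel/debug/ceph/*/osdc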

I checked the libvirt logs, the VM logs, etc., but found nothing useful.

Can anyone help me figure out what is going on? Do librbd, krbd, or something else need some arguments tuned?
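
The knobs I have found so far are below, in case they are relevant; I am
not sure any of them applies here (queue_depth in particular needs kernel
4.2+, which the stock CentOS 7.2 kernel is not):

  # block-layer queue settings of a mapped device
  cat /sys/block/rbd0/queue/nr_requests
  cat /sys/block/rbd0/queue/read_ahead_kb

  # newer kernels accept a queue_depth option at map time
  rbd map rbd_vms/vm0 -o queue_depth=256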

Thanks, all.

------------------
Wang Yong
Shanghai Datatom Information Technology Co., Ltd. - Chengdu R&D Center
Mobile: 15908149443
Email: wangyong@xxxxxxxxxxx
Address: Room 1409, Tower C, 希顿国际广场, 666 Tianfu Avenue, Chengdu, Sichuan

