Re: qemu-kvm vms start or reboot hang a long time while using the rbd mapped image

I believe the CentOS 7.3 kernel's libceph + krbd code was rebased onto the
latest stable upstream kernel, so it's possible lots of things were fixed.
If you search this mailing list, you will find lots of threads about
debugging stuck IO in krbd.
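
As a starting point, the kernel client exposes its in-flight OSD requests
through debugfs; a request that sits there for a long time usually points
at the OSD the IO is waiting on (this assumes debugfs is mounted at
/sys/kernel/debug):

    # one osdc file per ceph client instance; lists in-flight OSD requests
    cat /sys/kernel/debug/ceph/*/osdc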

On Tue, Jun 27, 2017 at 2:50 AM, 码云 <wang.yong@xxxxxxxxxxx> wrote:
> Hi Jason,
> In one integrated VDI test environment, we need to know the best practice.
> It seems like librbd performance is weaker than krbd.
> qemu 2.5.0 is not linked to librbd unless you manually configure and compile it.
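>
> (For illustration, a minimal sketch of a qemu 2.5.0 source build with rbd
> support; the --enable-rbd configure flag assumes the librbd/librados
> development headers are installed:
>
>     qemu-img --help | grep rbd    # does the existing binary list rbd?
>     ./configure --target-list=x86_64-softmmu --enable-rbd
>     make -j"$(nproc)"
> )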
> By the way, the rbd and libceph kernel module code were both changed in lots
> of places in CentOS 7.3;
> were they fixed for something?
> Tks and Rgds.
>
>
>
> ------------------ Original Message ------------------
> From: "Jason Dillaman" <jdillama@xxxxxxxxxx>
> Sent: Tuesday, June 27, 2017, 7:28 AM
> To: "码云" <wang.yong@xxxxxxxxxxx>
> Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
> Subject: Re: qemu-kvm vms start or reboot hang a long time while using
> the rbd mapped image
>
> May I ask why you are using krbd with QEMU instead of librbd?
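>
> For context: with a librbd-enabled qemu, the guest disk can point straight
> at the image instead of a mapped /dev/rbd* device, roughly like this (the
> pool/image name is a placeholder and cephx auth options are omitted):
>
>     qemu-system-x86_64 -drive file=rbd:rbd_vms/vm0,format=raw,if=virtio ...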
>
> On Fri, Jun 16, 2017 at 12:18 PM, 码云 <wang.yong@xxxxxxxxxxx> wrote:
>> Hi All,
>> Recently I met a problem and I didn't find anything to explain it.
>>
>> The ops process was like below (a condensed loop sketch follows the list):
>> ceph 10.2.5 jewel, qemu 2.5.0, centos 7.2 x86_64
>> create pool rbd_vms with 3 replicas, with a cache tier pool of 3 replicas
>> too.
>> create 100 images in rbd_vms
>> rbd map the 100 images to local devices, like /dev/rbd0 ... /dev/rbd100
>> dd if=/root/win7.qcow2  of=/dev/rbd0 bs=1M count=3000
>> virsh define 100 vms (vm0 ... vm100), each vm configured with one /dev/rbd* device.
>> virsh start the 100 vms.
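>>
>> (A condensed sketch of the steps above as shell loops; image and vm names
>> are illustrative:
>>
>>     # map the images and seed each one with the guest image
>>     for i in $(seq 0 99); do
>>       rbd map rbd_vms/vm$i      # typically shows up as /dev/rbd$i
>>       dd if=/root/win7.qcow2 of=/dev/rbd$i bs=1M count=3000
>>     done
>>     # start all vms concurrently
>>     for i in $(seq 0 99); do virsh start vm$i & done
>>     wait
>> )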
>>
>> When the 100 vms start concurrently, some vms hang.
>> When doing fio testing in those vms, some vms hang.
>>
>> I checked ceph status, osd status, logs, etc.  All look the same as before.
>>
>> But checking the devices with  iostat -dx 1,  some rbd* devices look strange:
>> %util is at 100%, but the read and write counts are all zero.
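>>
>> (Two quick cross-checks, assuming a mapped device named rbd0: the block
>> layer's own in-flight counters, and the kernel log's hung-task warnings:
>>
>>     cat /sys/block/rbd0/inflight          # reads/writes still in flight
>>     dmesg | grep 'blocked for more than'  # hung-task warnings, if any
>> )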
>>
>> I checked the virsh logs, vm logs, etc., but did not find any useful info.
>>
>> Can you help figure out what is going on?  Do librbd, krbd, or some other
>> place need some arguments adjusted?
>>
>> Thanks All.
>>
>> ------------------
>> Wang Yong (王勇)
>> Shanghai Datatom Information Technology Co., Ltd. - Chengdu R&D Center
>> Mobile: 15908149443
>> Email: wangyong@xxxxxxxxxxx
>> Address: Room 1409, Tower C, 希顿国际广场 (Xidun International Plaza), 666
>> Tianfu Avenue, Chengdu, Sichuan
>>
>>
>
>
>
> --
> Jason



-- 
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



