Re: qemu-img convert vs rbd import performance

We will give it a try. I have another cluster with a similar configuration where the converts work fine, and we have not changed any queue depth settings on that setup either. If it does turn out to be a queue depth issue, how can we set the queue depth for the qemu-img convert operation?
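
For reference, newer qemu-img builds (2.9 and later, if I remember right; check "qemu-img convert --help" before relying on them) expose their own parallelism knobs: "-m" sets the number of parallel convert coroutines (default 8) and "-W" allows out-of-order writes to the destination. A rough sketch, with <source-file> standing in for the long NFS path from the original run:

qemu-img convert -p -t none -m 16 -W -O raw \
    <source-file> \
    rbd:vms/volume-5ad883a0cd65435bb6ffbfa1243bbdc6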

Thank you.

Sent from my iPhone

> On Jun 28, 2017, at 7:56 PM, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
> 
> Given that your time difference is roughly 10x, my best guess is that
> qemu-img is sending the IO operations synchronously (queue depth = 1),
> whereas, by default, "rbd import" will send up to 10 write requests in
> parallel to the backing OSDs. That explanation also assumes that your
> per-request latency is quite high. You can re-run "rbd import" with
> "--rbd-concurrent-management-ops=1" to drop its queue depth to 1 and
> see whether its runtime becomes similar to qemu-img's.
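> 
> For example (a minimal sketch reusing the source file and pool from your
> log below; the destination name "import-qd1-test" is just a throwaway so
> it does not collide with the image you already created):
> 
>   time rbd import --rbd-concurrent-management-ops=1 \
>       66582225-6539-4e5e-9b7a-59aa16739df1 \
>       -p volumes import-qd1-test --image-format 2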
> 
>> On Wed, Jun 28, 2017 at 5:46 PM, Murali Balcha <murali.balcha@xxxxxxxxx> wrote:
>> I need some help resolving performance issues on my Ceph cluster. We are
>> running into acute performance problems when using qemu-img convert,
>> whereas the rbd import operation works perfectly well. Please ignore the
>> image format for a minute; I am trying to understand why rbd import
>> performs well on the same cluster where qemu-img convert takes an
>> inordinate amount of time. Here are the performance numbers:
>> 
>> 1. The qemu-img convert command took more than 48 hours to copy a 465 GB
>> image to Ceph.
>> 
>> [root@redhat-compute4 ~]# qemu-img convert -p -t none -O raw
>> /var/triliovault-mounts/MTAuMC4wLjc3Oi92YXIvbmZzX3NoYXJl/workload_326e8a43-a90a-4fe9-8aab-6d33bcdf5a05/snapshot_9f0cee13-8200-4562-82ec-1fb9f234bcd8/vm_id_05e9534e-5c84-4487-9613-1e0e227e4c1a/vm_res_id_24291e4b-93d2-47ad-80a8-bf3c395319b9_vdb/66582225-6539-4e5e-9b7a-59aa16739df1
>> rbd:vms/volume-5ad883a0cd65435bb6ffbfa1243bbdc6
>> 
>>    (100.00/100%)
>> 
>> 
>> [root@redhat-compute4 ~]#
>> 
>> 
>> 2. Simply copying the file to Ceph with rbd import (i.e. without qemu-img
>> convert) took only 3 hours 18 minutes.
>> 
>> [root@redhat-compute4 vm_res_id_24291e4b-93d2-47ad-80a8-bf3c395319b9_vdb]#
>> time rbd import 66582225-6539-4e5e-9b7a-59aa16739df1 -p volumes
>> 66582225-6539-4e5e-9b7a-59aa16739df1 --image-format 2
>> 
>> Importing image: 100% complete...done.
>> 
>> 
>> real    198m9.069s
>> 
>> user    5m32.724s
>> 
>> sys     18m32.213s
>> 
>> [root@redhat-compute4 vm_res_id_24291e4b-93d2-47ad-80a8-bf3c395319b9_vdb]#
>> 
>> [root@redhat-compute4 vm_res_id_24291e4b-93d2-47ad-80a8-bf3c395319b9_vdb]#
>> rbd info volumes/66582225-6539-4e5e-9b7a-59aa16739df1
>> 
>> rbd image '66582225-6539-4e5e-9b7a-59aa16739df1':
>> 
>>        size 465 GB in 119081 objects
>> 
>>        order 22 (4096 kB objects)
>> 
>>        block_name_prefix: rbd_data.753102ae8944a
>> 
>>        format: 2
>> 
>>        features: layering
>> 
>>        flags:
>> 
>> [root@redhat-compute4 vm_res_id_24291e4b-93d2-47ad-80a8-bf3c395319b9_vdb]#
>> 
>> 
>> I would appreciate it if anyone could give me pointers on where to look.
>> 
>> Best,
>> 
>> Murali Balcha
>> O 508.233.3912 | M 508.494.5007 | murali.balcha@xxxxxxxxx | trilio.io
>> 
>> 
> 
> 
> 
> -- 
> Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


