Re: qemu-img convert vs rbd import performance

Thanks for the information Jason!

We have a few concerns:
1. Below is our Ceph configuration. Is there anything that needs to be changed here?
#cat /etc/ceph/ceph.conf
[global]
fsid = 0e1bd4fe-4e2d-4e30-8bc5-cb94ecea43f0
mon_initial_members = cephlarge
mon_host = 10.0.0.188
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
public network = 10.0.0.0/16
osd max object name len = 256
osd max object namespace len = 64

[client]
     rbd cache = true
     rbd readahead trigger requests = 5
     rbd readahead max bytes = 419430400
     rbd readahead disable after bytes = 0
     rbd_concurrent_management_ops = 50

2. We are using an ext4 filesystem for Ceph. Does this hamper the write performance of qemu-img convert?
3. The qemu-img version we are using is qemu-img-1.5.3-126.el7_3.10.x86_64.
4. The Ceph version is Jewel (10.x).
5. Is there a way we can reduce latency so that qemu-img performance improves? (A rough latency-check sketch follows below.)
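Regarding point 5, one way to get a feel for how much per-request write latency the cluster adds would be a rados bench comparison of a single in-flight write against the default 16 concurrent writes. This is only a sketch; the pool name "volumes", the 30-second duration, and the 4 MB op size are placeholders for whatever we actually test with:

#rados bench -p volumes 30 write -t 1 -b 4194304 --no-cleanup
#rados bench -p volumes 30 write -t 16 -b 4194304 --no-cleanup
#rados -p volumes cleanup

If the average latency reported with -t 1 is high, a strictly serial writer like qemu-img convert would be slow regardless of total cluster throughput.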

Please provide your suggestions.


On Thu, Jul 20, 2017 at 6:50 PM, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
Running a similar 20G import test within a single OSD VM-based cluster, I see the following:
 
$ time qemu-img convert -p -O raw -f raw  ~/image rbd:rbd/image
    (100.00/100%)

real 3m20.722s
user 0m18.859s
sys 0m20.628s

$ time rbd import ~/image 
Importing image: 100% complete...done.

real 2m11.907s
user 0m12.236s
sys 0m20.971s

Examining the IO patterns from qemu-img, I can see that it is effectively using synchronous IO (i.e. only a single write is in-flight at a time), whereas "rbd import" will send up to 10 (by default) IO requests concurrently. Therefore, the higher the latencies to your cluster, the worse qemu-img will perform as compared to "rbd import". 
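As a very rough illustration (the request size and latency below are made-up numbers, just to show the effect): a 20 GB image written in 512 KB requests is about 41,000 writes. At 5 ms of round-trip latency per write, a serial writer spends roughly 41,000 x 5 ms, or about 205 seconds, just waiting on the network, while keeping 10 writes in flight cuts that waiting time to roughly 20 seconds. The real numbers depend on the image contents and the cluster, but the scaling with latency is the point.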



On Thu, Jul 20, 2017 at 5:07 AM, Mahesh Jambhulkar <mahesh.jambhulkar@xxxxxxxxx> wrote:
Adding "rbd readahead disable after bytes = 0" did not help.

[root@cephlarge mnt]# time qemu-img convert -p -O raw /mnt/data/workload_326e8a43-a90a-4fe9-8aab-6d33bcdf5a05/snapshot_9f0cee13-8200-4562-82ec-1fb9f234bcd8/vm_id_05e9534e-5c84-4487-9613-1e0e227e4c1a/vm_res_id_24291e4b-93d2-47ad-80a8-bf3c395319b9_vdb/66582225-6539-4e5e-9b7a-59aa16739df1 rbd:volumes/24291e4b-93d2-47ad-80a8-bf3c395319b9
    (100.00/100%)

real    4858m13.822s
user    73m39.656s
sys     32m11.891s
It took roughly 81 hours (4,858 minutes) to complete.

Also, it's not feasible to test this with the huge 465 GB file every time, so I tested qemu-img convert with a 20 GB file.

Parameters                                          Time taken
-t writeback                                        38 mins
-t none                                             38 mins
-S 4k                                               38 mins
With client options mentioned by Irek Fasikhov      40 mins

The time taken is almost the same.
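For a more direct comparison, we could also push the same 20 GB test file with "rbd import", which, as Jason mentioned, keeps multiple writes in flight. The file path and destination image name below are just placeholders for whatever we test with:

[root@cephlarge mnt]# time rbd import ./test20g.raw volumes/test20g

If that finishes noticeably faster than the ~38 minutes above, it would confirm that per-request latency, rather than raw cluster throughput, is what limits qemu-img convert.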

On Thu, Jul 13, 2017 at 6:40 PM, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
On Thu, Jul 13, 2017 at 8:57 AM, Irek Fasikhov <malmyzh@xxxxxxxxx> wrote:
>      rbd readahead disable after bytes = 0


There isn't any reading from an RBD image in this example -- plus,
readahead disables itself automatically after the first 50 MB of IO
(i.e. after the OS should have had enough time to start its own
readahead logic).

--
Jason



--
Regards,
mahesh j



--
Jason



--
Regards,
mahesh j
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
