Re: qemu-img convert vs rbd import performance

Hi.

You need to add the following to ceph.conf:
[client]
     rbd cache = true
     rbd readahead trigger requests = 5
     rbd readahead max bytes = 419430400
     rbd readahead disable after bytes = 0
     rbd_concurrent_management_ops = 50
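
Note that QEMU's own cache mode overrides these client-side settings, so with -t none the RBD cache stays disabled regardless of ceph.conf. A minimal sketch of the convert invocation with writeback caching enabled (the source path is abbreviated here; pool/image names are taken from the command quoted below):

     # -t writeback lets librbd actually use the rbd cache configured above
     qemu-img convert -p -t writeback -O raw \
         /mnt/data/.../66582225-6539-4e5e-9b7a-59aa16739df1 \
         rbd:volumes/24291e4b-93d2-47ad-80a8-bf3c395319b9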

2017-07-13 15:29 GMT+03:00 Mahesh Jambhulkar <mahesh.jambhulkar@xxxxxxxxx>:
I'm seeing some performance issues on my Ceph cluster: qemu-img convert writing directly to Ceph is much slower than a normal rbd import.

A direct data copy (rbd import, without qemu-img convert) took 5 hours 43 minutes for 465 GB of data.

[root@cephlarge vm_res_id_24291e4b-93d2-47ad-80a8-bf3c395319b9_vdb]# time rbd import 66582225-6539-4e5e-9b7a-59aa16739df1 -p volumes 66582225-6539-4e5e-9b7a-59aa16739df1_directCopy --image-format 2
rbd: --pool is deprecated for import, use --dest-pool
Importing image: 100% complete...done.

real    343m38.028s
user    4m40.779s
sys     7m18.916s
[root@cephlarge vm_res_id_24291e4b-93d2-47ad-80a8-bf3c395319b9_vdb]# rbd info volumes/66582225-6539-4e5e-9b7a-59aa16739df1_directCopy
rbd image '66582225-6539-4e5e-9b7a-59aa16739df1_directCopy':
        size 465 GB in 119081 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.373174b0dc51
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        flags:
[root@cephlarge vm_res_id_24291e4b-93d2-47ad-80a8-bf3c395319b9_vdb]#

qemu-img convert is still in progress and has completed merely 10% in more than 40 hours (for the same 465 GB of data).

[root@cephlarge mnt]# time qemu-img convert -p -t none -O raw /mnt/data/workload_326e8a43-a90a-4fe9-8aab-6d33bcdf5a05/snapshot_9f0cee13-8200-4562-82ec-1fb9f234bcd8/vm_id_05e9534e-5c84-4487-9613-1e0e227e4c1a/vm_res_id_24291e4b-93d2-47ad-80a8-bf3c395319b9_vdb/66582225-6539-4e5e-9b7a-59aa16739df1 rbd:volumes/24291e4b-93d2-47ad-80a8-bf3c395319b9
    (0.00/100%)
    (10.00/100%)
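
For comparison, recent qemu-img releases (2.9+) also accept -m (number of parallel coroutines) and -W (out-of-order writes), which can speed up convert considerably against RBD. This is only a sketch under the assumption that such a version is installed; the coroutine count is illustrative and the source path is abbreviated:

     # -m/-W require qemu-img 2.9 or newer
     qemu-img convert -p -t none -O raw -m 16 -W \
         /mnt/data/.../66582225-6539-4e5e-9b7a-59aa16739df1 \
         rbd:volumes/24291e4b-93d2-47ad-80a8-bf3c395319b9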

rbd bench-write shows a speed of ~21 MB/s.
[root@cephlarge ~]# rbd bench-write image01 --pool=rbdbench
bench-write  io_size 4096 io_threads 16 bytes 1073741824 pattern sequential
  SEC       OPS   OPS/SEC   BYTES/SEC
    2      6780   3133.53  12834946.35
    3      6831   1920.65  7866998.17
    4      8896   2040.50  8357871.83
    5     13058   2562.61  10496432.34
    6     17225   2836.78  11619432.99
    7     20345   2736.84  11210076.25
    8     23534   3761.57  15407392.94
    9     25689   3601.35  14751109.98
   10     29670   3391.53  13891695.57
   11     33169   3218.29  13182107.64
   12     36356   3135.34  12842344.21
   13     38431   2972.62  12175863.99
   14     47780   4389.77  17980497.11
   15     55452   5156.40  21120627.26
   16     59298   4772.32  19547440.33
   17     61437   5151.20  21099315.94
   18     67702   5861.64  24009295.97
   19     77086   5895.03  24146032.34
   20     85474   5936.09  24314243.88
   21     93848   7499.73  30718898.25
   22    100115   7783.39  31880760.34
   23    105405   7524.76  30821410.70
   24    111677   6797.12  27841003.78
   25    116971   6274.51  25700386.48
   26    121156   5468.77  22400087.81
   27    126484   5345.83  21896515.02
   28    137937   6412.41  26265239.30
   29    143229   6347.28  25998461.13
   30    149505   6548.76  26823729.97
   31    159978   7815.37  32011752.09
   32    171431   8821.65  36133479.15
   33    181084   8795.28  36025472.27
   35    182856   6322.41  25896605.75
   36    186891   5592.25  22905872.73
   37    190906   4876.30  19973339.07
   38    190943   3076.87  12602853.89
   39    190974   1536.79  6294701.64
   40    195323   2344.75  9604081.07
   41    198479   2703.00  11071492.89
   42    208893   3918.55  16050365.70
   43    214172   4702.42  19261091.89
   44    215263   5167.53  21166212.98
   45    219435   5392.57  22087961.94
   46    225731   5242.85  21474728.85
   47    234101   5009.43  20518607.70
   48    243529   6326.00  25911280.08
   49    254058   7944.90  32542315.10
elapsed:    50  ops:   262144  ops/sec:  5215.19  bytes/sec: 21361431.86
[root@cephlarge ~]#
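
Note that rbd bench-write defaults to 4 KB sequential writes, which is not very representative of a bulk image copy. A sketch of a run closer to qemu-img's large sequential writes, using the same thread count and total size as above (the 4 MiB io-size matches the image's 4096 kB object size):

     # 4 MiB writes, 16 threads, 1 GiB total
     rbd bench-write image01 --pool=rbdbench --io-size 4194304 --io-threads 16 --io-total 1073741824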

This Ceph deployment has 2 OSDs.
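
With only two OSDs, the raw write bandwidth of those disks (and the replication factor) is likely the ceiling for any import method. A quick sanity check of the backend, sketched with standard Ceph tooling (OSD ids and pool name taken from this setup):

     # per-OSD write benchmark (by default writes 1 GiB in 4 MiB blocks)
     ceph tell osd.0 bench
     ceph tell osd.1 bench
     # 30-second cluster-wide write benchmark against the volumes pool
     rados bench -p volumes 30 write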

It would be of great help if anyone could give me some pointers.

--
Regards,
mahesh j

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
