I am seeing performance issues on my Ceph cluster when qemu-img convert writes directly to Ceph, compared to a plain rbd import of the same data.
rbd bench-write reports a write speed of ~21 MB/s (full output below).
A direct data copy with rbd import (i.e. without qemu-img convert) took 5 hours 43 minutes for 465 GB of data.
[root@cephlarge vm_res_id_24291e4b-93d2-47ad-80a8-bf3c395319b9_vdb]# time rbd import 66582225-6539-4e5e-9b7a-59aa16739df1 -p volumes 66582225-6539-4e5e-9b7a-59aa16739df1_directCopy --image-format 2
rbd: --pool is deprecated for import, use --dest-pool
Importing image: 100% complete...done.

real    343m38.028s
user    4m40.779s
sys     7m18.916s

[root@cephlarge vm_res_id_24291e4b-93d2-47ad-80a8-bf3c395319b9_vdb]# rbd info volumes/66582225-6539-4e5e-9b7a-59aa16739df1_directCopy
rbd image '66582225-6539-4e5e-9b7a-59aa16739df1_directCopy':
        size 465 GB in 119081 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.373174b0dc51
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        flags:
[root@cephlarge vm_res_id_24291e4b-93d2-47ad-80a8-bf3c395319b9_vdb]#
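For what it's worth, that import works out to roughly the same throughput as the bench-write figure above:

    465 GB / 343m38s  =  (465 * 1024) MB / 20618 s  ≈  23 MB/s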
The qemu-img convert run is still in progress and has completed only 10% after more than 40 hours (for the same 465 GB of data).
[root@cephlarge mnt]# time qemu-img convert -p -t none -O raw /mnt/data/workload_326e8a43-a90a-4fe9-8aab-6d33bcdf5a05/snapshot_9f0cee13-8200-4562-82ec-1fb9f234bcd8/vm_id_05e9534e-5c84-4487-9613-1e0e227e4c1a/vm_res_id_24291e4b-93d2-47ad-80a8-bf3c395319b9_vdb/66582225-6539-4e5e-9b7a-59aa16739df1 rbd:volumes/24291e4b-93d2-47ad-80a8-bf3c395319b9
    (0.00/100%)
    (10.00/100%)
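Rough effective rate so far, assuming the 10% mark was reached at roughly the 40-hour point (my estimate):

    10% of 465 GB ≈ 46.5 GB in ~40 h  =>  (46.5 * 1024) MB / 144000 s  ≈  0.33 MB/s

I have not yet tried the newer qemu-img convert options for parallel/out-of-order writes; a sketch of what I mean, assuming qemu-img >= 2.9 (for the -m and -W flags) and that writeback caching is acceptable for this one-off copy:

    qemu-img convert -p -W -m 16 -t writeback -O raw <source file> rbd:volumes/<destination volume>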
This Ceph deployment has 2 OSDs.

[root@cephlarge ~]# rbd bench-write image01 --pool=rbdbench
bench-write  io_size 4096 io_threads 16 bytes 1073741824 pattern sequential
  SEC       OPS   OPS/SEC   BYTES/SEC
    2      6780   3133.53   12834946.35
    3      6831   1920.65    7866998.17
    4      8896   2040.50    8357871.83
    5     13058   2562.61   10496432.34
    6     17225   2836.78   11619432.99
    7     20345   2736.84   11210076.25
    8     23534   3761.57   15407392.94
    9     25689   3601.35   14751109.98
   10     29670   3391.53   13891695.57
   11     33169   3218.29   13182107.64
   12     36356   3135.34   12842344.21
   13     38431   2972.62   12175863.99
   14     47780   4389.77   17980497.11
   15     55452   5156.40   21120627.26
   16     59298   4772.32   19547440.33
   17     61437   5151.20   21099315.94
   18     67702   5861.64   24009295.97
   19     77086   5895.03   24146032.34
   20     85474   5936.09   24314243.88
   21     93848   7499.73   30718898.25
   22    100115   7783.39   31880760.34
   23    105405   7524.76   30821410.70
   24    111677   6797.12   27841003.78
   25    116971   6274.51   25700386.48
   26    121156   5468.77   22400087.81
   27    126484   5345.83   21896515.02
   28    137937   6412.41   26265239.30
   29    143229   6347.28   25998461.13
   30    149505   6548.76   26823729.97
   31    159978   7815.37   32011752.09
   32    171431   8821.65   36133479.15
   33    181084   8795.28   36025472.27
   35    182856   6322.41   25896605.75
   36    186891   5592.25   22905872.73
   37    190906   4876.30   19973339.07
   38    190943   3076.87   12602853.89
   39    190974   1536.79    6294701.64
   40    195323   2344.75    9604081.07
   41    198479   2703.00   11071492.89
   42    208893   3918.55   16050365.70
   43    214172   4702.42   19261091.89
   44    215263   5167.53   21166212.98
   45    219435   5392.57   22087961.94
   46    225731   5242.85   21474728.85
   47    234101   5009.43   20518607.70
   48    243529   6326.00   25911280.08
   49    254058   7944.90   32542315.10
elapsed:    50  ops:   262144  ops/sec:  5215.19  bytes/sec: 21361431.86
[root@cephlarge ~]#
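The elapsed summary above is where the ~21 MB/s number at the top comes from:

    21361431.86 bytes/sec / (1024 * 1024)  ≈  20.4 MB/s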
It would be a great help if anyone could give me some pointers.
--
Regards,
mahesh j