Re: qemu-img convert vs rbd import performance

Adding "rbd readahead disable after bytes = 0" did not help.
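
For reference, the option was presumably set client-side in ceph.conf, something like this minimal sketch:

[client]
rbd readahead disable after bytes = 0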

[root@cephlarge mnt]# time qemu-img convert -p -O raw /mnt/data/workload_326e8a43-a90a-4fe9-8aab-6d33bcdf5a05/snapshot_9f0cee13-8200-4562-82ec-1fb9f234bcd8/vm_id_05e9534e-5c84-4487-9613-1e0e227e4c1a/vm_res_id_24291e4b-93d2-47ad-80a8-bf3c395319b9_vdb/66582225-6539-4e5e-9b7a-59aa16739df1 rbd:volumes/24291e4b-93d2-47ad-80a8-bf3c395319b9     (100.00/100%)

real    4858m13.822s
user    73m39.656s
sys     32m11.891s
It took about 81 hours to complete.

Also, it's not feasible to test with the full 465 GB file every time, so I tested qemu-img convert with a 20 GB file instead.

Parameters                                    Time taken
-t writeback                                  38 mins
-t none                                       38 mins
-S 4k                                         38 mins
client options suggested by Irek Fasikhov     40 mins

The time taken is almost the same.
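
The variants in the table correspond to invocations of roughly this shape (a sketch; the 20 GB source path and destination volume name are made up for illustration):

qemu-img convert -p -t writeback -O raw /mnt/data/test-20g.img rbd:volumes/test-vol
qemu-img convert -p -t none -O raw /mnt/data/test-20g.img rbd:volumes/test-vol
qemu-img convert -p -S 4k -O raw /mnt/data/test-20g.img rbd:volumes/test-vol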

On Thu, Jul 13, 2017 at 6:40 PM, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
On Thu, Jul 13, 2017 at 8:57 AM, Irek Fasikhov <malmyzh@xxxxxxxxx> wrote:
>      rbd readahead disable after bytes = 0


There isn't any reading from an RBD image in this example -- plus
readahead disables itself automatically after the first 50MBs of IO
(i.e. after the OS should have had enough time to start its own
readahead logic).

--
Jason
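
For context, the auto-disable behaviour Jason describes corresponds to these librbd options and their documented defaults (worth verifying against your own Ceph release):

[client]
# readahead kicks in after 10 sequential read requests
rbd readahead trigger requests = 10
# at most 512 KB is read ahead per request
rbd readahead max bytes = 524288
# readahead turns itself off after the first 50 MB read from the image
rbd readahead disable after bytes = 52428800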



--
Regards,
mahesh j
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
