Thanks for the information, Jason!
We have a few concerns:
1. The following is our Ceph configuration. Is there anything that needs to be changed here?
# cat /etc/ceph/ceph.conf
[global]
fsid = 0e1bd4fe-4e2d-4e30-8bc5-cb94ecea43f0
mon_initial_members = cephlarge
mon_host = 10.0.0.188
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
public network = 10.0.0.0/16
osd max object name len = 256
osd max object namespace len = 64

[client]
rbd cache = true
rbd readahead trigger requests = 5
rbd readahead max bytes = 419430400
rbd readahead disable after bytes = 0
rbd_concurrent_management_ops = 50
2. We are using an ext4 filesystem for Ceph. Does this hamper the write performance of qemu-img convert?
3. The qemu-img version we are using is qemu-img-1.5.3-126.el7_3.10.x86_64.
4. Our Ceph version is Jewel (v10.2.x).
5. Is there a way we can reduce or work around latency so that qemu-img convert performance improves?
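For reference, here is a quick check we plan to run to gauge per-operation write latency from this client (a rough sketch; it assumes rados bench is available here and uses the same volumes pool that qemu-img convert writes to, with an arbitrary 10-second run and 4 MB writes):

$ rados bench -p volumes 10 write -t 1 -b 4194304    # -t 1 keeps a single write in flight

With only one write in flight, the reported average latency should roughly match the per-request round trip that a synchronous writer like qemu-img convert sees.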
Please provide your suggestions.
On Thu, Jul 20, 2017 at 6:50 PM, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
Running a similar 20G import test within a single OSD VM-based cluster, I see the following:

$ time qemu-img convert -p -O raw -f raw ~/image rbd:rbd/image
(100.00/100%)
real 3m20.722s
user 0m18.859s
sys 0m20.628s

$ time rbd import ~/image
Importing image: 100% complete...done.
real 2m11.907s
user 0m12.236s
sys 0m20.971s

Examining the IO patterns from qemu-img, I can see that it is effectively using synchronous IO (i.e. only a single write is in flight at a time), whereas "rbd import" will send up to 10 (by default) IO requests concurrently. Therefore, the higher the latencies to your cluster, the worse qemu-img will perform as compared to "rbd import".

--

On Thu, Jul 20, 2017 at 5:07 AM, Mahesh Jambhulkar <mahesh.jambhulkar@xxxxxxxxx> wrote:

Adding "rbd readahead disable after bytes = 0" did not help.

[root@cephlarge mnt]# time qemu-img convert -p -O raw /mnt/data/workload_326e8a43-a90a-4fe9-8aab-6d33bcdf5a05/snapshot_9f0cee13-8200-4562-82ec-1fb9f234bcd8/vm_id_05e9534e-5c84-4487-9613-1e0e227e4c1a/vm_res_id_24291e4b-93d2-47ad-80a8-bf3c395319b9_vdb/66582225-6539-4e5e-9b7a-59aa16739df1 rbd:volumes/24291e4b-93d2-47ad-80a8-bf3c395319b9
(100.00/100%)
real 4858m13.822s
user 73m39.656s
sys 32m11.891s

It took 80 hours to complete.

Also, it's not feasible to test this with the huge 465 GB file every time, so I tested qemu-img convert with a 20 GB file.
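The 20 GB runs were all of this general form, varying only the option shown in the first column of the table below (the source file and destination image names here are just placeholders):

[root@cephlarge mnt]# time qemu-img convert -p -t none -O raw -f raw /mnt/data/test-20G.img rbd:volumes/test-20g    # '-t none' swapped for '-t writeback', '-S 4k', etc. per row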
Parameters                                            Time taken
-t writeback                                          38 mins
-t none                                               38 mins
-S 4k                                                 38 mins
With the client options mentioned by Irek Fasikhov    40 mins

The time taken is almost the same in every case.

--

On Thu, Jul 13, 2017 at 6:40 PM, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
On Thu, Jul 13, 2017 at 8:57 AM, Irek Fasikhov <malmyzh@xxxxxxxxx> wrote:
> rbd readahead disable after bytes = 0
There isn't any reading from an RBD image in this example -- plus
readahead disables itself automatically after the first 50MBs of IO
(i.e. after the OS should have had enough time to start its own
readahead logic).
--
Jason
Regards,
mahesh j
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com