Hi,
Yeah, I have a ceph.conf on the physical host that the VM runs on.
It's a simple configuration. :-)
[global]
fsid = ***
mon_initial_members = *, *, *
mon_host = *, *, *
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
I changed the configuration online (at runtime); let me post the current values here, with a sketch of the commands right after the list:
"journal_queue_max_ops": "3000",
"journal_queue_max_bytes": "1048576000",
"journal_queue_max_bytes": "1048576000",
"journal_max_corrupt_search": "10485760",
"journal_max_write_bytes": "1048576000",
"journal_max_write_entries": "1000",
"journal_max_write_bytes": "1048576000",
"journal_max_write_entries": "1000",
"filestore_queue_max_ops": "500",
"filestore_queue_max_bytes": "104857600",
"filestore_queue_committing_max_ops": "5000",
"filestore_queue_committing_max_bytes": "1048576000",
"filestore_queue_max_bytes": "104857600",
"filestore_queue_committing_max_ops": "5000",
"filestore_queue_committing_max_bytes": "1048576000",
"filestore_max_inline_xattr_size": "254",
"filestore_max_inline_xattr_size_xfs": "65536",
"filestore_max_inline_xattr_size_btrfs": "2048",
"filestore_max_inline_xattr_size_other": "512",
"filestore_max_inline_xattrs": "6",
"filestore_max_inline_xattrs_xfs": "10",
"filestore_max_inline_xattrs_btrfs": "10",
"filestore_max_inline_xattrs_other": "2",
"filestore_max_alloc_hint_size": "1048576",
"filestore_max_sync_interval": "10",
"filestore_max_inline_xattr_size_xfs": "65536",
"filestore_max_inline_xattr_size_btrfs": "2048",
"filestore_max_inline_xattr_size_other": "512",
"filestore_max_inline_xattrs": "6",
"filestore_max_inline_xattrs_xfs": "10",
"filestore_max_inline_xattrs_btrfs": "10",
"filestore_max_inline_xattrs_other": "2",
"filestore_max_alloc_hint_size": "1048576",
"filestore_max_sync_interval": "10",
"osd_op_num_shards": "10",
But anyway, from my tests these settings have little impact on performance.
Btw, the Ceph version is 0.94.3 (Hammer).
Thanks!
hzwulibin@xxxxxxxxx
From: Alexandre DERUMIER
Date: 2015-10-21 17:12
To: hzwulibin
CC: ceph-users
Subject: Re: [performance] rbd kernel module versus qemu librbd

can you send me also your ceph.conf ?
do you have a ceph.conf on the vm hosts too ?

----- Original message -----
From: hzwulibin@xxxxxxxxx
To: "aderumier" <aderumier@xxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxx>
Sent: Wednesday 21 October 2015 10:31:56
Subject: Re: [performance] rbd kernel module versus qemu librbd

Hi,
let me post the version and configuration here first.

host os: debian 7.8, kernel 3.10.45
guest os: debian 7.8, kernel 3.2.0-4

qemu version:
ii ipxe-qemu 1.0.0+git-20131111.c3d1e78-2.1~bpo70+1 all PXE boot firmware - ROM images for qemu
ii qemu-kvm 1:2.1+dfsg-12~bpo70+1 amd64 QEMU Full virtualization on x86 hardware
ii qemu-system-common 1:2.1+dfsg-12~bpo70+1 amd64 QEMU full system emulation binaries (common files)
ii qemu-system-x86 1:2.1+dfsg-12~bpo70+1 amd64 QEMU full system emulation binaries (x86)
ii qemu-utils 1:2.1+dfsg-12~bpo70+1 amd64 QEMU utilities

vm config:
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <auth username='cinder'>
    <secret type='ceph' uuid='****'/>
  </auth>
  <source protocol='rbd' name='*****'>
    <host name='***' port='6789'/>
    <host name='***' port='6789'/>
    <host name='***' port='6789'/>
  </source>
  <target dev='vdf' bus='virtio'/>
  <serial>*******</serial>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x0'/>
</disk>

Thanks!
hzwulibin@xxxxxxxxx

From: Alexandre DERUMIER
Date: 2015-10-21 14:01
To: hzwulibin
CC: ceph-users
Subject: Re: [performance] rbd kernel module versus qemu librbd

Damn, that's a huge difference.
What is your host os, guest os, qemu version and vm config ?
As an extra boost, you could enable iothread on the virtio disk.
(It's available in libvirt but not in openstack yet.)
If it's a test server, maybe you could test it with the proxmox 4.0 hypervisor
https://www.proxmox.com
I have made a lot of patches inside it to optimize rbd (qemu+jemalloc, iothreads, ...)

----- Original message -----
From: hzwulibin@xxxxxxxxx
To: "aderumier" <aderumier@xxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxx>
Sent: Wednesday 21 October 2015 06:11:20
Subject: Re: Re: [performance] rbd kernel module versus qemu librbd

Hi,
Thanks for your reply.
I did more tests here and things got stranger; now I can only get about 4k iops in the VM:

1. use fio with the rbd ioengine to test the volume on the real machine:
[global]
ioengine=rbd
clientname=admin
pool=vol_ssd
rbdname=volume-4f4f9789-4215-4384-8e65-127a2e61a47f
rw=randwrite
bs=4k
group_reporting=1
[rbd_iodepth32]
iodepth=32
[rbd_iodepth1]
iodepth=32
[rbd_iodepth28]
iodepth=32
[rbd_iodepth8]
iodepth=32

This achieves about 18k iops.

2. test the same volume in the VM; this achieves about 4.3k iops:
[global]
rw=randwrite
bs=4k
ioengine=libaio
#ioengine=sync
iodepth=128
direct=1
group_reporting=1
thread=1
filename=/dev/vdb
[task1]
iodepth=32
[task2]
iodepth=32
[task3]
iodepth=32
[task4]
iodepth=32

Using ceph osd perf to check the osd latency, all are less than 1 ms.
Using iostat to check the osd %util, it is about 10 during the case 2 test.
Using dstat to check the VM status:
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
  2   4  51  43   0   0|   0    17M| 997B 3733B|   0     0 |3476  6997
  2   5  51  43   0   0|   0    18M| 714B 4335B|   0     0 |3439  6915
  2   5  50  43   0   0|   0    17M| 594B 3150B|   0     0 |3294  6617
  1   3  52  44   0   0|   0    18M| 648B 3726B|   0     0 |3447  6991
  1   5  51  43   0   0|   0    18M| 582B 3208B|   0     0 |3467  7061

Finally, using iptraf to check the packet sizes in the VM, almost all packets are in the 1 to 70 and 71 to 140 byte ranges. That's different from the real machine. But maybe iptraf in the VM can't prove anything; I checked the real machine that the VM runs on and nothing seems abnormal.
BTW, my VM is located on a ceph storage node.
Can anyone give me more suggestions?
Thanks!
hzwulibin@xxxxxxxxx

From: Alexandre DERUMIER
Date: 2015-10-20 19:36
To: hzwulibin
CC: ceph-users
Subject: Re: [performance] rbd kernel module versus qemu librbd

Hi,
I'm able to reach around the same performance with qemu-librbd vs qemu-krbd
when I compile qemu with jemalloc
(http://git.qemu.org/?p=qemu.git;a=commit;h=7b01cb974f1093885c40bf4d0d3e78e27e531363)
In my tests, librbd with jemalloc still uses 2x more cpu than krbd,
so cpu could be a bottleneck too.
With fast cpus (3.1ghz), I'm able to reach around 70k 4k iops with an rbd volume, both with krbd and librbd.

----- Original message -----
From: hzwulibin@xxxxxxxxx
To: "ceph-users" <ceph-users@xxxxxxxx>
Sent: Tuesday 20 October 2015 10:22:33
Subject: [performance] rbd kernel module versus qemu librbd

Hi,
I have a question about IOPS performance on a real machine versus a virtual machine.
Here is my test situation:
1. ssd pool (9 OSD servers with 2 osds on each server, 10Gb networks for public & cluster networks)
2. volume1: use rbd to create a 100G volume from the ssd pool and map it to the real machine
3. volume2: use cinder to create a 100G volume from the ssd pool and attach it to a guest host
4. disable rbd cache
5. fio test on the two volumes:
[global]
rw=randwrite
bs=4k
ioengine=libaio
iodepth=64
direct=1
size=64g
runtime=300s
group_reporting=1
thread=1

volume1 got about 24k IOPS and volume2 got about 14k IOPS.
We can see the performance of volume2 is not good compared to volume1, so is this normal behavior for a guest host?
If not, what might be the problem?
Thanks!
hzwulibin@xxxxxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com