Re: RBD Block performance vs rbd mount as filesystem

Hi,
I have adjusted the driver to <driver name='qemu' type='raw' cache='writeback' io='threads'/>.
The situation has not changed much; krbd still gets much better performance with dd, so I then tested with ioping:
--
librbd
 ioping -c 10 .
4 KiB from . (xfs /dev/mapper/centos-root): request=1 time=438 us
4 KiB from . (xfs /dev/mapper/centos-root): request=2 time=445 us
4 KiB from . (xfs /dev/mapper/centos-root): request=3 time=455 us
4 KiB from . (xfs /dev/mapper/centos-root): request=4 time=422 us
4 KiB from . (xfs /dev/mapper/centos-root): request=5 time=445 us
4 KiB from . (xfs /dev/mapper/centos-root): request=6 time=463 us
4 KiB from . (xfs /dev/mapper/centos-root): request=7 time=419 us
4 KiB from . (xfs /dev/mapper/centos-root): request=8 time=432 us
4 KiB from . (xfs /dev/mapper/centos-root): request=9 time=423 us
4 KiB from . (xfs /dev/mapper/centos-root): request=10 time=445 us

--- . (xfs /dev/mapper/centos-root) ioping statistics ---
10 requests completed in 9.01 s, 2.28 k iops, 8.90 MiB/s
min/avg/max/mdev = 419 us / 438 us / 463 us / 13 us

krbd
ioping -c 10 .
4 KiB from . (xfs /dev/mapper/centos-root): request=1 time=72 us
4 KiB from . (xfs /dev/mapper/centos-root): request=2 time=75 us
4 KiB from . (xfs /dev/mapper/centos-root): request=3 time=74 us
4 KiB from . (xfs /dev/mapper/centos-root): request=4 time=74 us
4 KiB from . (xfs /dev/mapper/centos-root): request=5 time=74 us
4 KiB from . (xfs /dev/mapper/centos-root): request=6 time=75 us
4 KiB from . (xfs /dev/mapper/centos-root): request=7 time=75 us
4 KiB from . (xfs /dev/mapper/centos-root): request=8 time=74 us
4 KiB from . (xfs /dev/mapper/centos-root): request=9 time=74 us
4 KiB from . (xfs /dev/mapper/centos-root): request=10 time=73 us

--- . (xfs /dev/mapper/centos-root) ioping statistics ---
10 requests completed in 9.00 s, 13.5 k iops, 52.8 MiB/s
min/avg/max/mdev = 72 us / 74 us / 75 us / 0 us
---
It looks like krbd gets much better latency...

I am planning to build a VM based on qemu-kvm, and I am thinking about which method to use: 1) use librbd to host the VM image directly on Ceph, or 2) use krbd, map the image to the host via /dev/rbd0, and place the VM as a qcow2 file inside that partition. The second method seems to give better performance, but it does not look like the right approach, since Ceph should be good as a block device. So I think, for features, the better method is librbd directly with Ceph. Any ideas or comments?

thank you!

Bill

On Sun, Oct 30, 2016 at 4:02 PM, Alexandre DERUMIER <aderumier@xxxxxxxxx> wrote:
>> Both VMs use
>> <driver name='qemu' type='raw' cache='directsync' io='native'/>

Note that with librbd: directsync|none = rbd_cache=false, and writeback|writethrough = rbd_cache=true.
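As an aside (just an illustration, not taken from your setup), the librbd cache can also be forced client-side in ceph.conf:

[client]
rbd cache = true                           # librbd writeback cache
rbd cache writethrough until flush = true  # stay in writethrough mode until the guest issues a flush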



>> and the VM is unable to mount /dev/rbd0 directly to test the speed...
That's really strange...


>> and I think that, technically, librbd should have much better performance than mounting /dev/rbd0, but the actual test does not look that way. Did I do anything wrong, or is any performance tuning required?

Mmm, not sure. Past tests have always shown a bit more performance with krbd.
The main bottleneck is that with librbd, qemu can be CPU limited (currently qemu uses one thread per disk, so one core, and with a lot of iops you could get better performance with krbd).

But for your bench I don't think that's the case.
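A minimal sketch of spreading that load with a dedicated iothread (assuming a virtio disk; the values below are only an example) would be to add <iothreads>1</iothreads> under <domain> and pin the disk's driver to it:

<iothreads>1</iothreads>
<!-- inside the <disk> element: pin this disk to iothread 1 -->
<driver name='qemu' type='raw' cache='writeback' io='threads' iothread='1'/>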

Have you tried to bench with "fio", using more parallel threads and a bigger queue depth?
Maybe krbd has better latency here, and dd is a single stream, so that could impact the results.
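For example, something along these lines inside the guest (file name, size and job count are only placeholders):

fio --name=randread --filename=/root/fio.test --size=4G \
    --rw=randread --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=32 --numjobs=4 --runtime=60 --group_reporting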



thank you!


----- Original Message -----
From: "Bill WONG" <wongahshuen@xxxxxxxxx>
To: "aderumier" <aderumier@xxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Friday, 28 October 2016 17:58:42
Subject: Re: RBD Block performance vs rbd mount as filesystem

Hi,
Both VMs use
<driver name='qemu' type='raw' cache='directsync' io='native'/>
and the VM is unable to mount /dev/rbd0 directly to test the speed...
and I think that, technically, librbd should have much better performance than mounting /dev/rbd0, but the actual test does not look that way. Did I do anything wrong, or is any performance tuning required?
thank you!

Bill

On Fri, Oct 28, 2016 at 5:47 PM, Alexandre DERUMIER <aderumier@xxxxxxxxx> wrote:


Hi,
Have you tried enabling cache=writeback when you use librbd?

It could be interesting to see the performance when using /dev/rbd0 directly in your VM, instead of mounting a qcow2 inside.
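For example (just a sketch; device path and target name are placeholders), the mapped device can be passed straight through to the guest:

<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/rbd0'/>
  <target dev='vdb' bus='virtio'/>
</disk>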

----- Original Message -----
From: "Bill WONG" <wongahshuen@xxxxxxxxx>
To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Friday, 28 October 2016 10:24:50
Subject: RBD Block performance vs rbd mount as filesystem

Hi All,
We have built a Ceph cluster with 72 OSDs, replica 3, all working fine. We have done some performance testing and found a very interesting issue.
We have a KVM + Libvirt + Ceph setup.

Case 1. KVM + Libvirt + Ceph with rbd (librbd) backend
The KVM hypervisor node creates a VM that uses an rbd block device directly as its storage backend. We run dd with fdatasync and get ~500 MB/s.
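(The dd test is along these lines; the path, block size and count below are placeholders, not the exact command used:)

dd if=/dev/zero of=/root/dd.test bs=1M count=4096 conv=fdatasync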

Case 2. KVM + Libvirt + Ceph with krbd mount
The KVM hypervisor node maps the image and mounts it directly as a local partition, e.g. mount /dev/rbd0 /mnt/VM_pool. We then create the VM with a qcow2 disk placed under that partition and run the same dd with fdatasync; it gets ~850 MB/s.
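(Roughly, the krbd side is set up like this; image name and size below are placeholders:)

rbd create VM_pool/vm-images --size 102400   # 100 GB image, name is only an example
rbd map VM_pool/vm-images                    # appears as /dev/rbd0
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /mnt/VM_pool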

Both cases are tested on the same hypervisor node with the same VM configuration. Why does mounting rbd0 as a filesystem on the hypervisor give much better performance? Any idea on this?

thank you!

---- KVM Ceph VM disk setting --
<disk type='network' device='disk'>
  <source protocol='rbd' name='VM_pool/VM1.img'>
    <host name='mon1' port='6789'/>
    <host name='mon2' port='6789'/>
    <host name='mon3' port='6789'/>
  </source>
  <auth username='libvirt' type='ceph'>
    <secret type='ceph' uuid='856b660e-ce4e-4a91-a7be-f17e469024c5'/>
  </auth>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>

-- KVM VM disk created on the Ceph rbd0 partition --
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/mnt/VM_Pool/CentOS1.qcow2'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>

=================================

Bill

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
