Slow IOPS on RBD compared to journal and backing devices

On Wed, May 7, 2014 at 5:57 PM, Christian Balzer <chibi at gol.com> wrote:
>
> Hello,
>
> ceph 0.72 on Debian Jessie, 2 storage nodes with 2 OSDs each. The journals
> are on (separate) DC 3700s, the actual OSDs are RAID6 behind an Areca 1882
> with 4GB of cache.
>
> Running this fio:
>
> fio --size=400m --ioengine=libaio --invalidate=1 --direct=1 --numjobs=1 --rw=randwrite --name=fiojob --blocksize=4k --iodepth=128
>
> results in:
>
>    30k IOPS on the journal SSD (as expected)
>   110k IOPS on the OSD (it fits neatly into the cache, no surprise there)
>   3200 IOPS from a VM using userspace RBD
>   2900 IOPS from a host kernelspace-mounted RBD
>
> When running the fio from the VM RBD the utilization of the journals is
> about 20% (2400 IOPS) and the OSDs are bored at 2% (1500 IOPS after some
> obvious merging).
> The OSD processes are quite busy, reading well over 200% on atop, but
> the system is not CPU or otherwise resource starved at that moment.
>
> Running multiple instances of this test from several VMs on different hosts
> changes nothing: the aggregate IOPS for the whole cluster still stays
> around 3200.
>
> Now clearly RBD has to deal with latency here, but the network is IPoIB
> with its associated low latency, and the journal SSDs are (consistently)
> the fastest ones around.
>
> I guess what I'm wondering is whether this is normal and to be expected,
> or if not, where all that potential performance got lost.

Hmm, with 128 IOs at a time (I believe I'm reading that correctly?)
that's about 40ms of latency per op (for userspace RBD), which seems
awfully long. You should check what your client-side objecter settings
are; it might be limiting you to fewer outstanding ops than that. If
it's available to you, testing with Firefly or even master would be
interesting; there's some performance work that should reduce
latencies.
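On the objecter-settings point: the client-side throttles live in the
[client] section of ceph.conf. A minimal sketch, assuming the option names
`objecter inflight ops` and `objecter inflight op bytes` (which cap the
number of in-flight ops and bytes from a librados/librbd client); the
values below are illustrative, not recommendations:

```ini
[client]
# Cap on concurrent in-flight ops from this client (assumed default: 1024).
# If this were lower than fio's iodepth, it would bound the queue depth.
objecter inflight ops = 2048
# Cap on in-flight bytes from this client (assumed default: 100 MB).
objecter inflight op bytes = 209715200
```

At the default of 1024 in-flight ops this throttle should not be the
limiter for an iodepth of 128, which is why checking the actual values on
the client is worthwhile.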

But a well-tuned (or even default-tuned, I thought) Ceph cluster
certainly doesn't require 40ms/op, so you should probably run a wider
array of experiments to try to figure out where it's coming from.
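As a sanity check on that 40ms figure: with the queue kept full, Little's
law gives mean latency = in-flight ops / throughput. A quick
back-of-the-envelope using the numbers from this thread:

```shell
# Little's law: mean per-op latency = queue depth / IOPS.
iodepth=128
iops=3200          # observed from the VM via userspace RBD
# 128 / 3200 s = 0.04 s = 40 ms average latency per op
echo "$(( iodepth * 1000 / iops )) ms per op"
```

For comparison, the same 128-deep queue at the journal SSD's 30k IOPS
works out to roughly 4 ms per op, so the RBD path is adding an order of
magnitude of latency somewhere.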
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com

