Re: Low RBD Performance

Hello,

On Tue, 4 Feb 2014 01:29:18 +0000 Gruher, Joseph R wrote:

[snip, nice enough test setup]

> I notice in the FIO output despite the iodepth setting it seems to be
> reporting an IO depth of only 1, which would certainly help explain poor
> performance, but I'm at a loss as to why, I wonder if it could be
> something specific to RBD behavior, like I need to use a different IO
> engine to establish queue depth.
> 
> IO depths    : 1=200.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%,
> >=64=0.0%
> 

This is definitely something with how you invoke fio, because when using
the iometer simulation I get:
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%

That's /usr/share/doc/fio/examples/iometer-file-access-server.fio on
Debian, and it uses libaio as well.
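If I had to guess at the cause (an assumption, since your actual job file
wasn't posted): libaio only really queues I/O when it is unbuffered, so a
job that leaves direct=0 will degrade to an effective depth of 1 no matter
what iodepth says. A minimal sketch of a job file along those lines, with
the device path and sizes as placeholders:

  [global]
  ; libaio only queues I/O when it is unbuffered (direct=1);
  ; with buffered I/O fio falls back to an effective depth of 1
  ioengine=libaio
  direct=1
  iodepth=64
  rw=randread
  bs=4k
  runtime=60
  time_based

  [rbd-test]
  ; placeholder, point this at your mapped RBD device
  filename=/dev/rbd0

With something like that, the "IO depths" histogram should report the bulk
of the I/O at >=64, as in the iometer run above.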

Your Cosbench results sound about right; I get about 300 IOPS with the
above fio parameters against a 2-node cluster with just one weak-sauce,
SSD-less OSD each, and over 100Mb/s (yes, Fast Ethernet!) to boot.
Clearly I'm less fortunate when it comes to hardware lying around for test
setups. ^o^

Also from your own company: ^o^
http://software.intel.com/en-us/blogs/2013/10/25/measure-ceph-rbd-performance-in-a-quantitative-way-part-i

Regards,

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/



