Re: Slow RBD performance bs=4k

On 15/12/14 17:44, ceph.com@xxxxxxxxxxxxx wrote:
I have the following setup:
Node1 = 8 x SSD
Node2 = 6 x SATA
Node3 = 6 x SATA

Having one node different from the rest is not going to help...you will probably get better results if you spread the SSDs across all three nodes, using the SATA disks for OSD data and the SSDs for OSD journals.
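As a rough sketch of what that could look like in ceph.conf (the OSD id, device path and size here are hypothetical), each SATA-backed OSD points its journal at a partition on one of the node's SSDs:

    [osd.0]
        # data lives on a SATA disk, journal on an SSD partition
        osd journal = /dev/disk/by-partlabel/ceph-journal-0
        # journal size in MB (10 GB)
        osd journal size = 10240

Alternatively, ceph-disk prepare can carve out the journal partition for you if you hand it both devices, e.g. "ceph-disk prepare /dev/sdb /dev/sdg" (data disk first, journal device second).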

Client1
All Cisco UCS running RHEL6.5 + kernel 3.18.0 + ceph 0.88.

A "dd bs=4k oflag=direct" test directly on a OSD disk shows me:
Node1 = 60MB/s
Node2 = 30MB/s
Node3 = 30MB/s
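For reference, that kind of test is usually something along these lines (target path hypothetical):

    dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/ddtest bs=4k count=25600 oflag=direct

oflag=direct bypasses the page cache, so each 4k write has to reach the disk before the next one is issued - a latency/IOPS test rather than a throughput test.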


Hmmm - your SSDs are slow for direct writes (15K IOPS, if my maths is right - what make and model are they?). For that matter, your SATA disks seem pretty slow too (what make and model are those?).
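For anyone checking the arithmetic: 60 MB/s of 4k writes is roughly 60,000 KB/s / 4 KB per write = 15,000 writes/s, i.e. ~15K IOPS.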

And as Christian has mentioned, Ceph's small-block-size IO performance has been discussed at length previously, so it is worth searching the list archives to understand the current state of things and to see that there has been *some* progress on improving it.

Cheers

Mark


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



