rbd performance issue - can't find bottleneck

Hi,

We've been doing some testing of Ceph Hammer (0.94.2), but performance is very slow and we can't find what's causing the problem.

Initially we started with four nodes and 10 OSDs total.
The drives we used were enterprise SATA drives, and on top of those we used SSDs both as flashcache devices for the SATA drives and for storing the OSD journals.
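A rough sketch of that layout, to make the setup concrete (device names, cache mode and the partition-label scheme here are examples, not our exact commands):

  # flashcache device (e.g. writeback) with an SSD partition in front of a SATA drive
  flashcache_create -p back osd0cache /dev/sdc1 /dev/sdb

  # OSD journals pointed at SSD partitions via ceph.conf
  [osd]
  osd journal = /dev/disk/by-partlabel/journal-$id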

The local tests on each of the four nodes gave the results you'd expect (example commands after the list):

~500MB/s seq writes and reads from the SSDs
~40k IOPS random reads from the SSDs
~200MB/s seq writes and reads from the SATA drives
~600 IOPS random reads from the SATA drives
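Numbers like these can be reproduced with fio invocations along these lines (device path and parameters are illustrative, not necessarily our exact runs):

  # sequential throughput, 1M blocks, direct I/O
  fio --name=seqwrite --filename=/dev/sdX --rw=write --bs=1M --iodepth=16 \
      --ioengine=libaio --direct=1 --runtime=60 --time_based --group_reporting

  # 4k random reads
  fio --name=randread --filename=/dev/sdX --rw=randread --bs=4k --iodepth=32 \
      --numjobs=4 --ioengine=libaio --direct=1 --runtime=60 --time_based --group_reporting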

But when we tested this setup from a client, we got rather slow results. So we tried to find the bottleneck and checked the network by connecting the client to our nodes via NFS - and performance via NFS was as expected (similar to the local tests, only slightly slower).
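(To illustrate the kind of NFS test meant here - sequential transfers against the mount; the mount point is an example:)

  dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=4096 oflag=direct
  dd if=/mnt/nfs/testfile of=/dev/null bs=1M iflag=direct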

So we reconfigured Ceph to not use the SATA drives at all and set up the OSDs directly on the SSDs (we wanted to test whether this was a flashcache problem)...

...but with no success. The results of RBD I/O tests from the two OSD nodes set up on SSD drives look like this (example commands after the list):

~60MB/s seq writes
~100MB/s seq reads
~2-3k IOPS random reads
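Results of this kind come from tests along these lines against the cluster and the mapped device (pool and image names are examples, not necessarily our exact runs):

  # raw RADOS throughput from the client, 30-second runs
  rados bench -p rbd 30 write --no-cleanup
  rados bench -p rbd 30 seq

  # random reads against the mapped RBD device
  fio --name=rbdrand --filename=/dev/rbd0 --rw=randread --bs=4k --iodepth=32 \
      --ioengine=libaio --direct=1 --runtime=60 --time_based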

The client is an RBD image mapped and mounted on a Linux box. All the servers (OSD nodes and the client) run Ubuntu Server 14.04. We tried switching to CentOS 7 - but the results are the same.
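For clarity, by "mounted" I mean the kernel RBD client, roughly like this (image name, size and mount point are examples):

  rbd create test --size 102400   # size in MB, so ~100 GB
  rbd map test                    # shows up as /dev/rbd0
  mkfs.xfs /dev/rbd0 && mount /dev/rbd0 /mnt/rbd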

Here are some technical details about our setup:

Four identical OSD nodes:
E5-1630 CPU
32 GB RAM
Mellanox MT27520 56Gbps network cards
LSI Logic SAS3008 SATA controller

The storage nodes are connected to SuperMicro 847E1C-R1K28JBOD chassis.

Four monitors (one on each node). We do not use CephFS, so we do not run ceph-mds.

During the tests we were monitoring all the OSD nodes and the client, and we didn't see any problems on any of the hosts: load was low, there were no CPU waits, no abnormal system interrupts, and no I/O problems on the disks. All the systems seemed to barely break a sweat, and yet the results are rather dissatisfying. We're kind of lost - any help will be appreciated.
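For completeness, this is roughly the kind of thing we were watching on each host during the runs:

  iostat -x 1       # per-disk utilization, await, queue sizes
  vmstat 1          # CPU waits, interrupts, context switches
  ceph -s           # overall cluster state
  ceph osd perf     # per-OSD commit/apply latency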

Cheers,
J

--
Jacek Jarosiewicz
IT Systems Administrator




