Re: Ceph RBD performance

Hi Michal, 

You can have a look at a thread I started a few days ago: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-December/006494.html

I had some questions about performance as well, and I think the explanations there apply to your case.
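To separate RBD overhead from raw cluster performance, it can also help to benchmark the pool directly with rados bench first (a sketch, assuming a pool named rbd; the 4 KB op size and 16 concurrent ops mirror your fio run):

rados bench -p rbd 60 write -b 4096 -t 16

If this shows similarly low IOPS, the bottleneck is in the cluster itself (journals, network, replication) rather than in the RBD layer.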

Also, your SSD does not appear to be datacenter (DC) grade, which is recommended for Ceph journals for both performance and durability.
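A quick way to verify is to measure the SSD's synchronous write performance directly, since that is what the journal workload looks like (a sketch, assuming the journal SSD is /dev/sdX; run it against a spare partition or a file on the SSD, as writing to the raw device is destructive):

fio --name=journal-test --filename=/dev/sdX --ioengine=libaio --direct=1 --sync=1 --rw=write --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based --group_reporting

Consumer SSDs often collapse to a few hundred IOPS under O_DIRECT + O_SYNC writes, while DC-grade drives sustain tens of thousands, which by itself could explain your numbers.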

Adrien

On Mon, Dec 14, 2015 at 1:15 PM, Michał Chybowski <michal.chybowski@xxxxxxxxxxxx> wrote:
Hi

I've set up a small (5-node) Ceph cluster. I'm trying to benchmark the real-life performance of Ceph's block storage, but I'm getting surprisingly low numbers from my benchmark setup.

My cluster consists of 5 nodes, every node has:
2x 3TB HGST SATA drives
1x Samsung SM841 120GB SSD for the OS and journals (3 partitions on this drive: the 1st is the OS, the 2nd and 3rd are journals)
2 networks (public and cluster), both on one interface (IP over InfiniBand on Mellanox InfiniHost III cards, up to 12 Gbit/s)
ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299) (latest available as debian-infernalis)

I'm using the command shown below to benchmark storage (both my existing solution and new ones that could possibly be adopted for production):
fio --name=testrw --ioengine=libaio --iodepth=16 --rw=randwrite --bs=4k --direct=1 --size=1024M --numjobs=5 --runtime=240 --group_reporting
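The same workload can also be pointed at RBD directly through fio's rbd engine, bypassing the kernel client (a variant sketch, assuming a pool named rbd and a pre-created test image named testimg):

fio --name=testrw-rbd --ioengine=rbd --clientname=admin --pool=rbd --rbdname=testimg --iodepth=16 --rw=randwrite --bs=4k --size=1024M --runtime=240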

Unfortunately, on Ceph RBD I'm getting only ~400-450 IOPS in this test.

My ceph.conf:
[global]
fsid = fcea4035-8a96-4e99-ae79-72e316197604
mon_initial_members = testing-1
mon_host = 10.31.7.21
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
public network = 10.31.7.21/24
cluster network = 10.32.7.21/24
osd pool default size = 2
osd pool default min size = 1

Is there anything I can do to get at least 10x single-HDD performance out of a single RBD mapping?

--
Regards,
Michał Chybowski
Tiktalik.com

--
-----------------------------------------------------------------------------------------
Adrien GILLARD

+33 (0)6 29 06 16 31
gillard.adrien@xxxxxxxxx