Ceph RBD performance

Hi

I've set up a small (5-node) Ceph cluster. I'm trying to benchmark more real-life performance of Ceph's block storage (RBD), but I'm seeing surprisingly low numbers from my benchmark setup.

My cluster consists of 5 nodes; every node has:
- 2 x 3TB HGST SATA drives
- 1 x Samsung SM841 120GB SSD for journal and system (3 partitions on this drive: the 1st is the OS, the 2nd and 3rd are journals)
- 2 networks (public and cluster), both on one interface (IP over InfiniBand on Mellanox InfiniHost III cards, up to 12 Gbit/s)
- ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299) (latest available as debian-infernalis)

I'm using the command shown below to benchmark storage (both my existing solution and new ones that could possibly be adopted for production):

fio --name=testrw --ioengine=libaio --iodepth=16 --rw=randwrite --bs=4k --direct=1 --size=1024M --numjobs=5 --runtime=240 --group_reporting
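
For the RBD case the same job is pointed at the image; a rough sketch of what that looks like with a kernel-mapped device (the pool/image/device names below are just placeholders, not my real ones):

rbd map testpool/testimage   # maps the image, e.g. to /dev/rbd0
fio --name=testrw --filename=/dev/rbd0 --ioengine=libaio --iodepth=16 --rw=randwrite --bs=4k --direct=1 --size=1024M --numjobs=5 --runtime=240 --group_reporting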

Unfortunately, on Ceph RBD I'm getting only ~400-450 IOPS in this test.
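
For comparison, my rough expectation (assuming ~150 random-write IOPS per 7200 rpm SATA drive, the 2x replication set in the pool defaults below, and journals on SSD so the data disks don't pay the double-write penalty) would be something like:

10 drives * ~150 IOPS / 2 replicas ≈ 750 IOPS

so ~400-450 looks low even against this back-of-envelope estimate.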

My ceph.conf:
[global]
fsid = fcea4035-8a96-4e99-ae79-72e316197604
mon_initial_members = testing-1
mon_host = 10.31.7.21
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
public network = 10.31.7.21/24
cluster network = 10.32.7.21/24
osd pool default size = 2
osd pool default min size = 1
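
I haven't set anything client-side; if it matters, below is a sketch of the librbd cache settings I could try (section and values are just the standard options, not something I've tested yet; as far as I know they only affect librbd clients such as qemu, not a kernel-mapped /dev/rbd device):

[client]
rbd cache = true
rbd cache writethrough until flush = true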

Is there anything I can do to get at least 10x the performance of a single HDD on a single RBD mapping?
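
To rule out the RBD layer itself, I can also compare against raw RADOS with rados bench; a sketch, with 'rbd' standing in for my pool name and the block size / queue depth mirroring the fio job:

rados bench -p rbd 60 write -b 4096 -t 16 --no-cleanup
rados -p rbd cleanup   # remove the benchmark objects afterwards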

--
Regards
Michał Chybowski
Tiktalik.com
