Re: rbd performance issue - can't find bottleneck

On 06/17/2015 03:38 PM, Alexandre DERUMIER wrote:
Hi,

can you post your ceph.conf?


sure:

[global]
fsid = e96fdc70-4f9c-4c12-aae8-63dd7c64c876
mon initial members = cf01,cf02
mon host = 10.4.10.211,10.4.10.212
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
filestore xattr use omap = true
public network = 10.4.10.0/24
#cluster network = 192.168.10.0/24
osd journal size = 10240
#journal dio = false
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 512
osd pool default pgp num = 512
osd crush chooseleaf type = 1

[mon.cf01]
host = cf01
mon addr = 10.4.10.211:6789

[mon.cf02]
host = cf02
mon addr = 10.4.10.212:6789

[osd.0]
host = cf01

[osd.1]
host = cf02


Which tools do you use for benchmarking?
Which block size, iodepth, and number of clients/rbd volumes do you use?


I use fio for random reads and dd for sequential reads and writes.
Block size is 4k (the filesystem on the OSDs is XFS). I used iodepths of 1, 4, 16 and 32; the deeper the queue, the worse the performance got. The results I posted in my message are from an fio command run like this:

fio --name=randread --numjobs=1 --rw=randread --bs=4k --size=10G --filename=test10g --direct=1
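The same workload can be written as an fio job file, which makes it easy to sweep the iodepths mentioned above in one run (a sketch; the section names and the ioengine choice are assumptions -- note that with the default synchronous engine, iodepth > 1 has no effect, so an async engine such as libaio is needed for the queue-depth comparison to be meaningful):

```ini
[global]
ioengine=libaio      ; async engine, so iodepth > 1 actually queues I/O
direct=1             ; bypass the page cache
bs=4k
rw=randread
size=10G
filename=test10g
numjobs=1

; one job per queue depth; stonewall makes each job wait for the
; previous one, so the runs don't overlap.
[qd1]
iodepth=1

[qd4]
stonewall
iodepth=4

[qd16]
stonewall
iodepth=16

[qd32]
stonewall
iodepth=32
```

A single depth can also be run on its own with, e.g., `fio --section=qd16 jobfile.fio`.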


Is it with the krbd kernel driver?
(I have seen some bad performance with kernel 3.16, but at a much higher rate, around 100k iops.)
Is it with ethernet switches, or IP over InfiniBand?


kernel driver, kernel version 3.10.0-229.4.2.el7.x86_64 (the last tests were on CentOS 7.1; when we used Ubuntu, the kernel was 3.13.0-53-generic)

We use Ethernet switches (Mellanox MSX1012). The switches are configured with MLAG, and we use Mellanox dual-port 56 Gbps cards with bonded interfaces in round-robin mode.

your results seem quite low anyway.


yes.. :(

I'm also using Mellanox Ethernet switches (10 GbE) and SAS3008 controllers (Dell R630),
and I can reach around 250k IOPS of 4K random reads with a single OSD (at ~80% usage of 2x10 cores @ 3.1 GHz).



here is my ceph.conf
-----------------
[global]
fsid = ....
public_network =....
mon_initial_members = ...
mon_host =.....
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
filestore_xattr_use_omap = true
osd_pool_default_min_size = 1
debug_lockdep = 0/0
debug_context = 0/0
debug_crush = 0/0
debug_buffer = 0/0
debug_timer = 0/0
debug_journaler = 0/0
debug_osd = 0/0
debug_optracker = 0/0
debug_objclass = 0/0
debug_filestore = 0/0
debug_journal = 0/0
debug_ms = 0/0
debug_monc = 0/0
debug_tp = 0/0
debug_auth = 0/0
debug_finisher = 0/0
debug_heartbeatmap = 0/0
debug_perfcounter = 0/0
debug_asok = 0/0
debug_throttle = 0/0
osd_op_threads = 5
filestore_op_threads = 4
osd_op_num_threads_per_shard = 2
osd_op_num_shards = 10
filestore_fd_cache_size = 64
filestore_fd_cache_shards = 32
ms_nocrc = true
ms_dispatch_throttle_bytes = 0
cephx_sign_messages = false
cephx_require_signatures = false
throttler_perf_counter = false
ms_crc_header = false
ms_crc_data = false

[osd]
osd_client_message_size_cap = 0
osd_client_message_cap = 0
osd_enable_op_tracker = false


(the main boosts come from disabling cephx auth and debug logging, and from increasing the thread sharding)
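Distilled, the performance-relevant part of the conf above is roughly this fragment (a sketch pulled from the full conf; note that disabling cephx is only safe on a trusted, isolated cluster network):

```ini
[global]
# disable authentication entirely (trusted network only!)
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
cephx_sign_messages = false
cephx_require_signatures = false

# silence debug logging in the hot path
debug_ms = 0/0
debug_osd = 0/0
debug_filestore = 0/0
debug_journal = 0/0
; (plus the remaining debug_* = 0/0 lines from the full conf)

# more parallelism in the OSD op pipeline
osd_op_num_shards = 10
osd_op_num_threads_per_shard = 2
filestore_op_threads = 4
```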

Will try your suggested config and let you know, thanks!

J

--
Jacek Jarosiewicz
IT Systems Administrator

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



