Haomai,
Sure, I will change 'ms_event_op_threads' and update my findings.

Alexandre,
Here is my config:

[global]
filestore_xattr_use_omap = true
debug_lockdep = 0/0
debug_context = 0/0
debug_crush = 0/0
debug_buffer = 0/0
debug_timer = 0/0
debug_filer = 0/0
debug_objecter = 0/0
debug_rados = 0/0
debug_rbd = 0/0
debug_journaler = 0/0
debug_objectcacher = 0/0
debug_client = 0/0
debug_osd = 0/0
debug_optracker = 0/0
debug_objclass = 0/0
debug_filestore = 0/0
debug_journal = 0/0
debug_ms = 0/0
debug_monc = 0/0
debug_tp = 0/0
debug_auth = 0/0
debug_finisher = 0/0
debug_heartbeatmap = 0/0
debug_perfcounter = 0/0
debug_asok = 0/0
debug_throttle = 0/0
debug_mon = 0/0
debug_paxos = 0/0
debug_rgw = 0/0
osd_op_threads = 2
osd_op_num_threads_per_shard = 2
osd_op_num_shards = 12
filestore_op_threads = 4
ms_nocrc = true
filestore_fd_cache_size = 100000
filestore_fd_cache_shards = 10000
cephx sign messages = false
cephx require signatures = false
ms_dispatch_throttle_bytes = 0
throttler_perf_counter = false
osd_pool_default_size = 1
osd_pool_default_min_size = 1
filestore_wbthrottle_enable = false

[osd]
osd_journal_size = 150000
osd_client_message_size_cap = 0
osd_client_message_cap = 0
osd_enable_op_tracker = false

Thanks & Regards
Somnath

-----Original Message-----
From: Haomai Wang [mailto:haomaiwang@xxxxxxxxx]
Sent: Saturday, October 18, 2014 10:15 PM
To: Somnath Roy
Cc: ceph-devel@xxxxxxxxxxxxxxx
Subject: Re: The Async messenger benchmark with latest master

Thanks Somnath!

I have another simple performance test for the async messenger:

For 4k object reads, the master branch took 4.46 s to complete the tests; the async messenger took 3.14 s.
For 4k object writes, the master branch took 10.6 s; the async messenger took 6.6 s!!
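A quick back-of-the-envelope on the times just quoted (5000 objects per run; the awk one-liners below are only an illustrative sketch, not part of the original test programs):

```shell
# Convert the measured wall times into ops/s and relative improvement.
# All numbers are taken from the runs quoted in this thread.
objs=5000
awk -v m=4.46 -v a=3.14 -v n=$objs 'BEGIN {
    printf "read:  %.0f -> %.0f ops/s (~%.0f%% faster)\n", n/m, n/a, (m/a-1)*100 }'
awk -v m=10.6 -v a=6.6 -v n=$objs 'BEGIN {
    printf "write: %.0f -> %.0f ops/s (~%.0f%% faster)\n", n/m, n/a, (m/a-1)*100 }'
# -> read:  1121 -> 1592 ops/s (~42% faster)
# -> write: 472 -> 758 ops/s (~61% faster)
```

So the write path gains even more than the read path from the async messenger in this test.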
Detailed results are below. The 4k object read test is a simple Ceph client program that reads 5000 objects; the 4k object write test writes 5000 objects. I increased "ms_event_op_threads" to 10 from the default of 2. Maybe Somnath can do the same and test again; I think we can get more improvement in your tests.

Master branch (6fa686c8c42937dd069591f16de92e954d8ed34d):

[root@ceph-test src]# for i in `seq 1 3`; do date && ~/08.rados_read_4k_for_each_object /etc/ceph/ceph.conf rbd 5000 && sleep 3; done
Fri Oct 17 17:10:39 UTC 2014
Used Time:4.461581
Fri Oct 17 17:10:44 UTC 2014
Used Time:4.388572
Fri Oct 17 17:10:48 UTC 2014
Used Time:4.448157

[root@ceph-test src]# for i in `seq 1 3`; do date && ~/01.rados_write_4k_for_each_object /etc/ceph/ceph.conf rbd 5000 && sleep 3; done
Fri Oct 17 17:11:23 UTC 2014
Used Time:10.638783
Fri Oct 17 17:11:33 UTC 2014
Used Time:10.793231
Fri Oct 17 17:11:44 UTC 2014
Used Time:10.908003

Master branch with AsyncMessenger:

[root@ceph-test src]# for i in `seq 1 3`; do date && ~/08.rados_read_4k_for_each_object /etc/ceph/ceph.conf rbd 5000 && sleep 3; done
Sun Oct 19 06:01:50 UTC 2014
Used Time:3.155506
Sun Oct 19 06:01:53 UTC 2014
Used Time:3.134961
Sun Oct 19 06:01:56 UTC 2014
Used Time:3.135814

[root@ceph-test src]# for i in `seq 1 3`; do date && ~/01.rados_write_4k_for_each_object /etc/ceph/ceph.conf rbd 5000 && sleep 3; done
Sun Oct 19 06:02:03 UTC 2014
Used Time:6.536319
Sun Oct 19 06:02:10 UTC 2014
Used Time:6.648738
Sun Oct 19 06:02:16 UTC 2014
Used Time:6.585156

On Sat, Oct 18, 2014 at 4:37 AM, Somnath Roy <Somnath.Roy@xxxxxxxxxxx> wrote:
> Hi Sage/Haomai,
>
> I did some 4K random read benchmarking with the latest master, which has the async messenger changes, and the results look promising.
>
> My configuration:
> ---------------------
>
> 1 node, 8 SSDs, 8 OSDs, 3 pools, each with 3 images; ~2000 PGs cluster-wide.
> CPU: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz, dual socket, HT enabled, 40 cores.
> Used krbd as the client.
>
> 1 client node with 3 rbd images on 3 different pools:
> -------------------------------------------------------------------
>
> Master:
> ---------
>
> ~203K IOPS, ~90% of latencies within 4 msec, total read: ~5.2 TB
>
> lat (usec) : 4=0.01%, 10=0.01%, 50=0.01%, 100=0.01%, 250=0.03%
> lat (usec) : 500=0.86%, 750=3.72%, 1000=7.19%
> lat (msec) : 2=39.17%, 4=41.45%, 10=7.26%, 20=0.24%, 50=0.06%
> lat (msec) : 100=0.03%, 250=0.01%, 500=0.01%
>
> CPU: ~1-2% idle
>
> Giant:
> ------
>
> ~196K IOPS, ~89-90% of latencies within 4 msec, total read: ~5.3 TB
>
> lat (usec) : 250=0.01%, 500=0.51%, 750=2.71%, 1000=6.03%
> lat (msec) : 2=34.32%, 4=45.74%, 10=10.46%, 20=0.21%, 50=0.02%
> lat (msec) : 100=0.01%, 250=0.01%
>
> CPU: ~2% idle
>
> 2 client nodes with 3 rbd images each on 3 different pools:
> -------------------------------------------------------------------
>
> Master:
> ---------
>
> ~207K IOPS, ~70% of latencies within 4 msec, total read: ~5.99 TB
>
> lat (usec) : 250=0.03%, 500=0.63%, 750=2.67%, 1000=5.21%
> lat (msec) : 2=25.12%, 4=36.19%, 10=24.80%, 20=3.16%, 50=1.34%
> lat (msec) : 100=0.66%, 250=0.18%, 500=0.01%
>
> CPU: ~0-1% idle
>
> Giant:
> --------
>
> ~199K IOPS, ~64% of latencies within 4 msec, total read: ~5.94 TB
>
> lat (usec) : 250=0.01%, 500=0.25%, 750=1.47%, 1000=3.45%
> lat (msec) : 2=21.22%, 4=36.69%, 10=30.63%, 20=4.28%, 50=1.70%
> lat (msec) : 100=0.28%, 250=0.02%, 500=0.01%
>
> CPU: ~1% idle
>
> So, in summary, master with the async messenger has improved in both IOPS and latency.
>
> Thanks & Regards
> Somnath
>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
> in the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html

--
Best Regards,
Wheat
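The "~90% of latencies within 4 msec" headlines in Somnath's numbers can be recovered by summing the fio latency buckets up to and including the 4 ms bin. A sketch using the single-client master figures quoted above (awk is used only for the arithmetic; the bucket values come straight from the fio output):

```shell
# Sum the fio latency-bucket percentages through the 4 ms bin
# (master, 1-client read run quoted above).
awk 'BEGIN {
    n = split("0.01 0.01 0.01 0.01 0.03 0.86 3.72 7.19 39.17 41.45", p, " ")
    s = 0
    for (i = 1; i <= n; i++) s += p[i]
    printf "%.2f%% of reads completed within 4 ms\n", s
}'
# -> 92.46% of reads completed within 4 ms
```

That lands slightly above the quoted ~90%, consistent with the figure being a rounded-down summary of the same buckets.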