Hi Jiangang,

I managed to get some data for you, but it's for a 3-node cluster. I will try to get data for a single node as well.

Test config:
-------------
Cluster and rbd node config:
----------------------------------
2x E5-2680 10C 2.8GHz 25M
8x 16GB RDIMM, dual rank x4 (128GB)
Mellanox MT27500 40 Gigabit Ethernet
LSI 9207 SAS HBA
8 x 800 GB SSDs (Optimus Eco) per cluster node
3 cluster nodes + 3 rbd nodes
Total storage ~19 TB

We have 24 OSDs in total; each node runs 8 OSDs, one per SSD. We configured 3 pools with 528 PGs/pool and 6 RBDs/pool. Each RBD image is ~230 GB. We ran the 64K_RR_QD64 workload (64 KB random read, queue depth 64) here.

HT_ENABLE
--------------
IOPS : 112500
Throughput (MB/s): 7012
Avg Resp. Time (msec): 17
Max Resp. Time (msec): 3184

HT_DISABLE
--------------
IOPS : 120864
Throughput (MB/s): 7530
Avg Resp. Time (msec): 11
Max Resp. Time (msec): 1056

So, ~7% IOPS increase, but the response time decrease is ~35%, which is really good.

Thanks & Regards
Somnath

-----Original Message-----
From: Duan, Jiangang [mailto:jiangang.duan@xxxxxxxxx]
Sent: Wednesday, October 08, 2014 1:03 PM
To: Somnath Roy; Andreas Bluemle; ceph-devel@xxxxxxxxxxxxxxx
Subject: RE: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params

Sounds good. Thanks.
-jiangang

-----Original Message-----
From: Somnath Roy [mailto:Somnath.Roy@xxxxxxxxxxx]
Sent: Wednesday, October 08, 2014 10:53 AM
To: Duan, Jiangang; Andreas Bluemle; ceph-devel@xxxxxxxxxxxxxxx
Subject: RE: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params

Hi Jiangang,
Give me a day or two; I will gather all the data and share it with the community.

Thanks & Regards
Somnath

-----Original Message-----
From: Duan, Jiangang [mailto:jiangang.duan@xxxxxxxxx]
Sent: Wednesday, October 08, 2014 10:47 AM
To: Somnath Roy; Andreas Bluemle; ceph-devel@xxxxxxxxxxxxxxx
Subject: RE: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params

Can you guys share the w/ HT and w/o HT data? I want to take a look at it to understand why.
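[For readers replaying these numbers: the ~7% IOPS gain and ~35% average-latency drop quoted above can be rechecked with a quick script. This is just an arithmetic sanity check on the reported figures, not part of the original benchmark harness.]

```python
# Recompute the HT-enabled vs HT-disabled deltas from Somnath's numbers.
ht_on = {"iops": 112500, "mbps": 7012, "avg_ms": 17, "max_ms": 3184}
ht_off = {"iops": 120864, "mbps": 7530, "avg_ms": 11, "max_ms": 1056}

# Relative IOPS gain from disabling HT.
iops_gain = (ht_off["iops"] - ht_on["iops"]) / ht_on["iops"] * 100
# Relative drop in average response time from disabling HT.
lat_drop = (ht_on["avg_ms"] - ht_off["avg_ms"]) / ht_on["avg_ms"] * 100

print(f"IOPS gain with HT off:    {iops_gain:.1f}%")  # ~7.4%
print(f"Avg response time drop:   {lat_drop:.1f}%")   # ~35.3%
```

The exact figures come out to 7.4% and 35.3%, consistent with the rounded "~7%" and "~35%" in the mail.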
-jiangang

-----Original Message-----
From: ceph-devel-owner@xxxxxxxxxxxxxxx [mailto:ceph-devel-owner@xxxxxxxxxxxxxxx] On Behalf Of Somnath Roy
Sent: Wednesday, October 08, 2014 10:38 AM
To: Andreas Bluemle; ceph-devel@xxxxxxxxxxxxxxx
Subject: RE: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params

Thanks Andreas for sharing this. I will try those out. BTW, I am using Ubuntu 14.04 LTS and couldn't find any sysfs entry like 'cpufreq':

root@stormeap-4:~# ll /sys/devices/system/cpu/cpu10/
cache/    crash_notes       driver/         microcode/  online  subsystem/         topology/
cpuidle/  crash_notes_size  firmware_node/  node0/      power/  thermal_throttle/  uevent

I am using Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz.

Regards
Somnath

-----Original Message-----
From: ceph-devel-owner@xxxxxxxxxxxxxxx [mailto:ceph-devel-owner@xxxxxxxxxxxxxxx] On Behalf Of Andreas Bluemle
Sent: Wednesday, October 08, 2014 9:33 AM
To: ceph-devel@xxxxxxxxxxxxxxx
Subject: Re: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params

Hi,

as mentioned during today's meeting, here are the kernel boot parameters which I found to provide the basis for good performance results:

  processor.max_cstate=0
  intel_idle.max_cstate=0

I understand these to basically turn off any power-saving modes of the CPU; the CPUs we are using are like

  Intel(R) Xeon(R) CPU E5-2640 v2 @ 2.00GHz
  Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz

At the BIOS level, we
- turn off Hyperthreading
- turn off Turbo mode (in order not to leave the specifications)
- turn on frequency floor override

We also assert that /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor is set to "performance".

Using the above, we see a constant frequency at the maximum level allowed by the CPU (excluding Turbo mode).

Best Regards

Andreas Bluemle

On Wed, 8 Oct 2014 02:51:21 +0200
Mark Nelson <mark.nelson@xxxxxxxxxxx> wrote:

> Hi All,
>
> Just a reminder that the weekly performance meeting is on Wednesdays at
> 8AM PST. Same bat time, same bat channel!
>
> Etherpad URL:
> http://pad.ceph.com/p/performance_weekly
>
> To join the Meeting:
> https://bluejeans.com/268261044
>
> To join via Browser:
> https://bluejeans.com/268261044/browser
>
> To join with Lync:
> https://bluejeans.com/268261044/lync
>
> To join via Room System:
> Video Conferencing System: bjn.vc -or- 199.48.152.152
> Meeting ID: 268261044
>
> To join via Phone:
> 1) Dial:
>    +1 408 740 7256
>    +1 888 240 2560 (US Toll Free)
>    +1 408 317 9253 (Alternate Number)
>    (see all numbers - http://bluejeans.com/numbers)
> 2) Enter Conference ID: 268261044
>
> Mark
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
> in the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html

--
Andreas Bluemle                     mailto:Andreas.Bluemle@xxxxxxxxxxx
ITXperts GmbH                       http://www.itxperts.de
Balanstrasse 73, Geb. 08            Phone: (+49) 89 89044917
D-81541 Muenchen (Germany)          Fax:   (+49) 89 89044910

Company details: http://www.itxperts.de/imprint.htm
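[A small sketch of the scaling_governor assertion Andreas describes earlier in the thread. This is an illustrative check, not a script from the thread; it assumes the cpufreq sysfs interface is present, which, as Somnath observed on his Ubuntu 14.04 box, is not always the case.]

```python
# Check that every online CPU's scaling governor is set to "performance".
# If no cpufreq driver is loaded, the glob matches nothing and we report that.
from glob import glob

paths = glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor")
govs = {p: open(p).read().strip() for p in paths}

if not govs:
    print("no cpufreq sysfs entries - is a cpufreq driver loaded?")
else:
    bad = {p: g for p, g in govs.items() if g != "performance"}
    if bad:
        print(f"governors not set to performance: {bad}")
    else:
        print("all governors set to performance")
```

Setting the governor itself (e.g. writing "performance" into each of those files as root, or via cpufrequtils) would be the companion step; this only verifies the state Andreas asserts.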