Re: Ceph Hammer OSD Shard Tuning Test Results

>>>  This would be a good thing to bring up in the meeting on Wednesday.
>Yes!
>

Yes, we can discuss the details on Wednesday's call.


>
>>>I wonder how much effect flow-control and header/data crc had.
>Yes. I know that Somnath also disabled crc for his bench.
>

I disabled Ceph's header/data CRC for both simplemessenger and xio, but I 
didn't run with header/data CRC enabled to compare the difference.
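
For reference, disabling the messenger CRCs in Hammer amounts to roughly the 
following ceph.conf snippet (a sketch, not the exact config from these runs; 
ms_nocrc is the Hammer-era option, and newer releases split it into 
ms_crc_header/ms_crc_data):

[global]
# disable messenger header/data CRC (Hammer-era option; sketch only)
ms_nocrc = true
# on newer releases the equivalent would be:
# ms_crc_header = false
# ms_crc_data = false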


>
>>>Were the simplemessenger tests on IPoIB or native?
>
>I think it's native, as the Vu Pham benchmark was done on Mellanox 
>SX1012 switches (Ethernet),
>and the xio messenger was on RoCE (RDMA over Converged Ethernet).
>
Yes, it's native for simplemessenger and RoCE for the xio messenger.


>
>>>How big was the RBD volume that was created (could some data be
>>>locally cached)? Did network data transfer statistics match the
>>>benchmark result numbers?
>
A single OSD on a 4GB ramdisk; the journal size is 256MB.
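
For anyone wanting to reproduce a similar single-OSD-on-ramdisk setup, 
something along these lines works (illustrative only, not the exact commands 
from this run):

$ modprobe brd rd_nr=1 rd_size=4194304        # one 4GB ramdisk at /dev/ram0 (rd_size is in KB)
$ mkfs.xfs /dev/ram0
$ mount /dev/ram0 /var/lib/ceph/osd/ceph-0    # hypothetical OSD data path
# plus "osd journal size = 256" in ceph.conf for the 256MB journal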

The RBD volume is only 128MB; however, I ran the fio_rbd client with direct=1 
to bypass the local buffer cache.
Yes, the network data transfer statistics match the benchmark result numbers.
I used "dstat -N <ethX>" to monitor the network traffic.

I also ran all cores at full speed and applied one tuning parameter to the 
Mellanox ConnectX-3 HCA mlx4_core driver
(options mlx4_core log_num_mgm_entry_size=-7)
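
The module option goes into a modprobe config and takes effect after a driver 
reload or reboot; roughly (the file name is illustrative):

$ echo "options mlx4_core log_num_mgm_entry_size=-7" > /etc/modprobe.d/mlx4_core.conf
$ cat /sys/module/mlx4_core/parameters/log_num_mgm_entry_size    # check after reload/reboot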

$ cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq
2601000

$ for c in /sys/devices/system/cpu/cpu[0-9]*; do echo 2601000 > ${c}/cpufreq/scaling_min_freq; done
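
A quick sanity check that the cores really stay at full speed (verification 
commands only, not output from these runs):

$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq    # should now read 2601000
$ grep "cpu MHz" /proc/cpuinfo | sort -u                       # all cores at/near max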



>
>
>
>I cc'd Vu Pham on this mail; maybe he'll be able to give us an answer.
>
>
>Note that I'll have the same Mellanox switches (SX1012) for my production 
>cluster in a few weeks,
>so I'll be able to reproduce the bench (with 2x10-core 3.1GHz nodes 
>and clients).
>
>
>
>
>
>----- Original Message -----
>From: "Mark Nelson" <mnelson@xxxxxxxxxx>
>To: "aderumier" <aderumier@xxxxxxxxx>
>Cc: "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
>Sent: Monday, March 2, 2015 15:39:24
>Subject: Re:  Ceph Hammer OSD Shard Tuning Test Results
>
>Hi Alex,
>
>I see I even responded in the same thread! This would be a good thing
>to bring up in the meeting on Wednesday. Those are far faster single
>OSD results than I've been able to muster with simplemessenger. I
>wonder how much effect flow-control and header/data crc had. He did
>have quite a bit more CPU (Intel specs say 14 cores @ 2.6GHz, 28 if you
>count hyperthreading). Depending on whether there were 1 or 2 CPUs in
>that node, that might be around 3x the CPU power I have here.
>
>Some other thoughts: Were the simplemessenger tests on IPoIB or native?
>How big was the RBD volume that was created (could some data be
>locally cached)? Did network data transfer statistics match the
>benchmark result numbers?
>
>I also did some tests on fdcache, though just glancing at the results it 
>doesn't look like tweaking those parameters had much effect.
>
>Mark
>
>On 03/01/2015 08:38 AM, Alexandre DERUMIER wrote:
>>  Hi Mark,
>>
>>  I found a previous bench from Vu Pham (it was about 
>>  simplemessenger vs xio messenger)
>>
>>  http://www.spinics.net/lists/ceph-devel/msg22414.html
>>
>>  and with 1 osd, he was able to reach 105k iops with simple messenger
>>
>>  ~105k iops (4K random read, 20 cores used, numjobs=8, iodepth=32)
>>
>>  this was with more powerful nodes, but the difference seems to be 
>>  quite huge
>>
>>
>>
>>  ----- Original Message -----
>>  From: "aderumier" <aderumier@xxxxxxxxx>
>>  To: "Mark Nelson" <mnelson@xxxxxxxxxx>
>>  Cc: "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
>>  Sent: Friday, February 27, 2015 07:10:42
>>  Subject: Re:  Ceph Hammer OSD Shard Tuning Test Results
>>
>>  Thanks, Mark, for the results;
>>  the default values seem to be quite reasonable indeed.
>>
>>
>>  I also wonder if CPU frequency can have an impact on latency or not.
>>  I'm going to benchmark on dual Xeon 10-core 3.1GHz nodes in the coming 
>>  weeks,
>>  and I'll try to replay your benchmark to compare.
>>
>>
>>
>>  ----- Original Message -----
>>  From: "Mark Nelson" <mnelson@xxxxxxxxxx>
>>  To: "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
>>  Sent: Thursday, February 26, 2015 05:44:15
>>  Subject:  Ceph Hammer OSD Shard Tuning Test Results
>>
>>  Hi Everyone,
>>
>>  In the Ceph Dumpling/Firefly/Hammer SSD/Memstore performance comparison 
>>  thread, Alexandre DERUMIER wondered if changing the default shard and 
>>  threads per shard OSD settings might have a positive effect on 
>>  performance in our tests. I went back and used one of the PCIe SSDs 
>>  from our previous tests to experiment with a recent master pull. I 
>>  wanted to know how performance was affected by changing these parameters 
>>  and also to validate that the default settings still appear to be correct.
>>
>>  I plan to conduct more tests (potentially across multiple SATA SSDs in 
>>  the same box), but these initial results seem to show that the default 
>>  settings that were chosen are quite reasonable.
>>
>>  Mark
>>




