Re: Ceph Hammer OSD Shard Tuning Test Results

Can I ask what the xio and simple messengers are, and what the differences between them are?

Kind regards

Kevin Walker
+968 9765 1742

On 1 Mar 2015, at 18:38, Alexandre DERUMIER <aderumier@xxxxxxxxx> wrote:

Hi Mark,

I found a previous benchmark from Vu Pham (it was about simplemessenger vs xiomessenger):

http://www.spinics.net/lists/ceph-devel/msg22414.html

With 1 OSD, he was able to reach ~105k IOPS with simplemessenger:

~105k iops (4K random read, 20 cores used, numjobs=8, iodepth=32)

This was with more powerful nodes, but the difference seems to be quite large.
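For reference, those numbers map onto an fio job roughly like the sketch below. The ioengine and target of Vu Pham's run aren't stated in this thread, so the rbd engine and the client, pool, and image names here are assumptions:

  [global]
  ; assumption: engine and target were not stated in the quoted result
  ioengine=rbd
  clientname=admin
  pool=rbd
  rbdname=bench
  ; parameters quoted above
  rw=randread
  bs=4k
  iodepth=32
  numjobs=8
  direct=1
  runtime=60
  time_based
  group_reporting

  [4k-randread]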



----- Original Message -----
From: "aderumier" <aderumier@xxxxxxxxx>
To: "Mark Nelson" <mnelson@xxxxxxxxxx>
Cc: "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Friday, 27 February 2015 07:10:42
Subject: Re: Ceph Hammer OSD Shard Tuning Test Results

Thanks Mark for the results,
the default values seem to be quite reasonable indeed.


I also wonder whether CPU frequency can have an impact on latency or not.
I'm going to benchmark on dual-Xeon 10-core 3.1GHz nodes in the coming weeks;
I'll try to replay your benchmark to compare.
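One variable worth pinning down for such a latency comparison is frequency scaling. Assuming the cpupower tool is installed, the cores can be locked to the performance governor for the duration of the run:

  # pin all cores to the performance governor so turbo/idle scaling
  # doesn't vary the clock during the benchmark
  cpupower frequency-set -g performance
  # confirm the active governor and the current frequency range
  cpupower frequency-info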



----- Original Message -----
From: "Mark Nelson" <mnelson@xxxxxxxxxx>
To: "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Thursday, 26 February 2015 05:44:15
Subject: Ceph Hammer OSD Shard Tuning Test Results

Hi Everyone, 

In the Ceph Dumpling/Firefly/Hammer SSD/Memstore performance comparison 
thread, Alexandre DERUMIER wondered if changing the default shard and 
threads per shard OSD settings might have a positive effect on 
performance in our tests. I went back and used one of the PCIe SSDs 
from our previous tests to experiment with a recent master pull. I 
wanted to know how performance was affected by changing these parameters 
and also to validate that the default settings still appear to be correct. 
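For anyone wanting to reproduce this: the settings under test are, to my understanding, the ceph.conf options sketched below. The defaults shown are what I believe Hammer-era builds ship with; verify against a running OSD as noted in the comment:

  [osd]
  ; sharded op worker queue: number of shards, and threads per shard
  ; defaults as I understand them for Hammer-era builds; check a live
  ; OSD with: ceph daemon osd.0 config show | grep osd_op_num
  osd op num shards = 5
  osd op num threads per shard = 2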

I plan to conduct more tests (potentially across multiple SATA SSDs in 
the same box), but these initial results seem to show that the default 
settings that were chosen are quite reasonable. 

Mark 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com