Hi Alex,
I see I even responded in the same thread! This would be a good thing
to bring up in the meeting on Wednesday. Those are far faster single-OSD
results than I've been able to muster with simplemessenger. I wonder
how much effect flow control and header/data crc had. He did
have quite a bit more CPU (Intel specs say 14 cores @ 2.6GHz, 28 if you
count hyperthreading). Depending on whether there were 1 or 2 CPUs in
that node, that might be around 3x the CPU power I have here.
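Regarding the crc question: if anyone wants to measure that cost
directly, I believe it can be switched off in ceph.conf. A minimal
sketch, with the caveat that I'm citing the option names from memory
(older builds used a single ms_nocrc option instead), so verify against
your build before trusting any numbers:

  [global]
  # Disable message header/data checksumming to isolate the crc cost.
  # Test rigs only; don't run a production cluster like this.
  ms_crc_header = false
  ms_crc_data = false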
Some other thoughts: Were the simplemessenger tests on IPoIB or native?
How big was the RBD volume that was created (could some data be
locally cached)? Did network data transfer statistics match the
benchmark result numbers?
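On that last question, a quick way to sanity-check is to snapshot the
NIC counters around the run and compare against what the benchmark
claims. A rough sketch (the interface name is a placeholder for
whatever actually carried the traffic):

  #!/usr/bin/env python
  # Snapshot /proc/net/dev byte counters around a benchmark run and
  # report how much data actually crossed the wire.

  def read_bytes(iface):
      with open('/proc/net/dev') as f:
          for line in f:
              if line.strip().startswith(iface + ':'):
                  fields = line.split(':', 1)[1].split()
                  # field 0 is rx_bytes, field 8 is tx_bytes
                  return int(fields[0]), int(fields[8])
      raise ValueError('interface %s not found' % iface)

  iface = 'ib0'  # placeholder: the interface the test traffic used
  rx0, tx0 = read_bytes(iface)
  raw_input('run the benchmark, then press enter... ')
  rx1, tx1 = read_bytes(iface)
  print('rx: %.2f GB  tx: %.2f GB'
        % ((rx1 - rx0) / 1e9, (tx1 - tx0) / 1e9))

If the rx total comes in far below iops * block size * runtime,
something was being served from a local cache.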
I also did some tests on fdcache, though at a glance it doesn't look
like tweaking those parameters had much effect.
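For anyone who wants to repeat the fdcache runs, the knobs I was
varying look roughly like this (values are just the defaults as I
remember them, not recommendations; check your build's config
reference):

  [osd]
  # FileStore fd cache tuning (example values only)
  filestore_fd_cache_size = 128
  filestore_fd_cache_shards = 16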
Mark
On 03/01/2015 08:38 AM, Alexandre DERUMIER wrote:
Hi Mark,
I found a previous benchmark from Vu Pham (it was about simplemessenger vs xiomessenger)
http://www.spinics.net/lists/ceph-devel/msg22414.html
and with 1 osd, he was able to reach ~105k iops with simplemessenger
(4K random read, 20 cores used, numjobs=8, iodepth=32).
This was with more powerful nodes, but the difference seems to be quite huge.
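For reference, that test shape should be reproducible with an fio job
along these lines (the pool and image names are placeholders, and it
assumes an fio build with the rbd engine):

  [global]
  # pool/image below are placeholders for whatever the test used
  ioengine=rbd
  clientname=admin
  pool=rbd
  rbdname=testimage
  rw=randread
  bs=4k
  numjobs=8
  iodepth=32
  group_reporting

  [4k-randread]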
----- Original Message -----
From: "aderumier" <aderumier@xxxxxxxxx>
To: "Mark Nelson" <mnelson@xxxxxxxxxx>
Cc: "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Friday, 27 February 2015 07:10:42
Subject: Re: Ceph Hammer OSD Shard Tuning Test Results
Thanks Mark for the results,
the default values seem to be quite reasonable indeed.
I also wonder if CPU frequency can have an impact on latency or not.
I'm going to benchmark on dual Xeon 10-core 3.1GHz nodes in the coming weeks;
I'll try to replay your benchmark to compare.
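(To keep the runs comparable I'll probably pin the frequency governor
first, e.g.

  cpupower frequency-set -g performance

assuming the cpupower tool is installed on the nodes.)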
----- Original Message -----
From: "Mark Nelson" <mnelson@xxxxxxxxxx>
To: "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Thursday, 26 February 2015 05:44:15
Subject: Ceph Hammer OSD Shard Tuning Test Results
Hi Everyone,
In the Ceph Dumpling/Firefly/Hammer SSD/Memstore performance comparison
thread, Alexandre DERUMIER wondered if changing the default shard and
threads per shard OSD settings might have a positive effect on
performance in our tests. I went back and used one of the PCIe SSDs
from our previous tests to experiment with a recent master pull. I
wanted to know how performance was affected by changing these parameters
and also to validate that the default settings still appear to be correct.
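For anyone who wants to poke at the same knobs, these are the settings
in question; the values shown are the defaults as I understand them
(please verify against your own build):

  [osd]
  # OSD op worker sharding (defaults shown)
  osd_op_num_shards = 5
  osd_op_num_threads_per_shard = 2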
I plan to conduct more tests (potentially across multiple SATA SSDs in
the same box), but these initial results seem to show that the default
settings that were chosen are quite reasonable.
Mark