Hi Alex,
Thanks! I didn't tweak the sharding settings at all, so they are just
at the default values:
OPTION(osd_op_num_threads_per_shard, OPT_INT, 2)
OPTION(osd_op_num_shards, OPT_INT, 5)
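For anyone who wants to experiment, the same options can be overridden in the [osd] section of ceph.conf (just a sketch; the values below simply restate the defaults above):

[osd]
# number of sharded op work queues per OSD, and worker threads per shard
osd_op_num_shards = 5
osd_op_num_threads_per_shard = 2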
I don't have really good insight yet into how tweaking these would
affect single-OSD performance. I know the PCIe SSDs do have multiple
controllers on board, so perhaps increasing the number of shards would
improve things, but I suspect that going too high could start hurting
performance as well. Have you done any testing here? It could be an
interesting follow-up paper.
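If anyone wants to poke at this themselves, something along these lines
would be a rough way to sweep shard counts on a single-OSD test cluster
and compare results. This is only a sketch: the pool name ("bench"), the
restart command, and the sed edit are placeholders for whatever your
setup uses, and it assumes osd_op_num_shards already appears in the
[osd] section of ceph.conf. Note the sharded work queues are created at
OSD startup, so the value only takes effect after a restart.

#!/bin/sh
# Rough sketch: compare 4K write throughput for a few osd_op_num_shards values.
for shards in 5 10 20; do
    # edit ceph.conf and bounce the OSD; injectargs won't help here since
    # the shards are created at startup
    sed -i "s/^osd_op_num_shards = .*/osd_op_num_shards = $shards/" /etc/ceph/ceph.conf
    service ceph restart osd.0
    sleep 30   # give the OSD time to come back up and peer

    # confirm the running value via the admin socket
    ceph daemon osd.0 config get osd_op_num_shards

    # 60 seconds of 4K writes with 32 concurrent ops, then clean up
    rados bench -p bench 60 write -b 4096 -t 32 --no-cleanup
    rados -p bench cleanup
done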
Mark
On 02/18/2015 02:34 AM, Alexandre DERUMIER wrote:
Nice work, Mark!
I don't see any tuning of the sharding options in the sample config file
(osd_op_num_threads_per_shard, osd_op_num_shards, ...).
Since you only used 1 SSD for the benchmark, I think tuning them could improve the results for Hammer?
----- Original Message -----
From: "Mark Nelson" <mnelson@xxxxxxxxxx>
To: "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Tuesday, February 17, 2015 18:37:01
Subject: Ceph Dumpling/Firefly/Hammer SSD/Memstore performance comparison
Hi All,
I wrote up a short document describing some tests I ran recently to look
at how SSD-backed OSD performance has changed across our LTS releases.
This is just looking at RADOS performance, not RBD or RGW, and it
doesn't offer any real explanations for the results. It's just a
first high-level step toward understanding some of the behaviors folks
on the mailing list have reported over the last couple of releases. I
hope you find it useful.
Mark