> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Christian Balzer
> Sent: 20 January 2016 10:31
> To: ceph-users@xxxxxxxx
> Subject: Re: SSD OSDs - more Cores or more GHz
>
> Hello,
>
> On Wed, 20 Jan 2016 10:01:19 +0100 Götz Reinicke - IT Koordinator wrote:
>
> > Hi folks,
> >
> > we plan to use more ssd OSDs in our first cluster layout instead of
> > SAS osds. (more IO is needed than space)
> >
> > short question: What would influence the performance more? more Cores
> > or more GHz/Core.
> >
> > Or is it as always: Depends on the total of OSDs/nodes/repl-level/etc
> > ... :)
> >
>
> While there certainly is a "depends" in there, my feeling is that faster cores
> are more helpful than many, slower ones.

I would say it depends on whether your objective is to get as much IO as
possible out of the SSDs at high queue depths, or whether you need very low
latency at low queue depths. For the former, more cores are better, as you can
spread the requests over all of them. The latter needs very fast clock speeds;
maybe something like a Xeon E3 (4 cores at ~3.6GHz) with one or two SSDs per
node. Of course there are chips with both lots of cores and reasonably fast
clock speeds, but expect to pay a lot for them.

> And this is how I spec'ed my first SSD nodes, 1 fast core (Intel, thus 2
> pseudo-cores) per OSD.
> The reasoning is simple, an individual OSD thread will run (hopefully) on one
> core and thus be faster, with less latency(!).
>
> > If needed, I can give some more detailed information on the layout.
> >
> Might be interesting for other sanity checks, if you don't mind.
>
> Regards,
>
> Christian
> --
> Christian Balzer        Network/Systems Engineer
> chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
> http://www.gol.com/

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
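
As a rough way to see which of the two regimes a given workload actually falls
into, something like fio's rbd engine can compare them directly. This is only a
minimal sketch; the pool, image and client names below are placeholders, and it
assumes a disposable test image already exists in the cluster:

  # High queue depth: aggregate IOPS, work spread across many OSD threads/cores
  fio --ioengine=rbd --clientname=admin --pool=rbd --rbdname=fio-test \
      --name=highqd --rw=randwrite --bs=4k --iodepth=32 \
      --runtime=60 --time_based

  # Queue depth 1: per-request latency, dominated by single-core clock speed
  fio --ioengine=rbd --clientname=admin --pool=rbd --rbdname=fio-test \
      --name=lowqd --rw=randwrite --bs=4k --iodepth=1 \
      --runtime=60 --time_based

If the iodepth=32 results are what matters for the application, more cores per
node tend to pay off; if the iodepth=1 completion latencies are the concern,
faster clocks per core are the better lever.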