This is very true, but do you actually pin cores exclusively to the OSD daemons so they don't interfere? I don't think many people do that; it wouldn't work with more than a handful of OSDs. An OSD might typically need <100% of one core, but during startup or some reshuffling it's beneficial to allow it to get more (>400%), and that will interfere with whatever else was pinned there...

Jan

> On 20 Jan 2016, at 13:07, Oliver Dzombic <info@xxxxxxxxxxxxxxxxx> wrote:
>
> Hi,
>
> Cores > Frequency
>
> If you think about recovery/scrubbing tasks, it's better when a CPU core
> can be assigned to do this, compared to a situation where the same core
> needs to recover/scrub and still deliver the productive content at the
> same time.
>
> The more you can create a situation where an OSD has its "own" CPU core,
> the better. Modern CPUs are so fast anyway that even SSDs can't push
> them to their limit.
>
> --
> Mit freundlichen Gruessen / Best regards
>
> Oliver Dzombic
> IP-Interactive
>
> mailto:info@xxxxxxxxxxxxxxxxx
>
> Address:
>
> IP Interactive UG (haftungsbeschraenkt)
> Zum Sonnenberg 1-3
> 63571 Gelnhausen
>
> HRB 93402, Amtsgericht Hanau
> Managing director: Oliver Dzombic
>
> Tax no.: 35 236 3622 1
> VAT ID: DE274086107
>
>
> On 20.01.2016 at 10:01, Götz Reinicke - IT Koordinator wrote:
>> Hi folks,
>>
>> we plan to use more SSD OSDs in our first cluster layout instead of SAS
>> OSDs (more IO is needed than space).
>>
>> Short question: what would influence the performance more, more cores
>> or more GHz per core?
>>
>> Or is it as always: depends on the total of OSDs/nodes/repl-level/etc ... :)
>>
>> If needed, I can give some more detailed information on the layout.
>>
>> Thanks for feedback. Götz
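
For reference, a minimal sketch of the kind of exclusive pinning discussed above, assuming Linux and Python 3. The pid-file path of the form /var/run/ceph/osd.<id>.pid is an assumption (it depends on the "pid file" option in your ceph.conf); treat this as an illustration of the mechanism, not a recommendation.

    import os

    OSD_ID = 0
    # Hypothetical pid-file location; adjust to your ceph.conf "pid file" setting.
    PID_FILE = "/var/run/ceph/osd.%d.pid" % OSD_ID

    with open(PID_FILE) as f:
        pid = int(f.read().strip())

    # Give the OSD a four-core partition instead of a single core, so bursts
    # during startup or recovery (>100% of one core) stay inside its own
    # partition instead of spilling onto cores pinned to other daemons.
    os.sched_setaffinity(pid, {0, 1, 2, 3})

    # Caveat: this changes the affinity of the main thread; OSD worker threads
    # that already exist keep their old mask. To pin the whole daemon you would
    # set the mask before it starts, e.g. with taskset or systemd's CPUAffinity=.
    print("osd.%d restricted to cores %s" % (OSD_ID, sorted(os.sched_getaffinity(pid))))

This also makes Jan's objection concrete: with, say, 12 OSDs per node and a four-core partition each, you would need 48 cores just for the OSD daemons, which is why exclusive pinning stops being practical beyond a handful of OSDs.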