Re: Cores/Memory/GHz recommendation for SSD based OSD servers


 



I'm probably going to get shot down for saying this...but here goes.

As a very rough guide, think of it as needing around 10 MHz of CPU for every IO. Whether that IO is 4 kB or 4 MB, it uses roughly the same amount of CPU, since most of the CPU time goes into Ceph data placement rather than the actual reads and writes to disk.

I can nearly saturate 12 x 2.1 GHz cores with a single SSD doing 4 kB IOs at high queue depths.

Which brings us back to your original question: rather than asking how much CPU for x number of SSDs, ask how many IOPS you require out of your cluster.
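
For what it's worth, here's a rough back-of-the-envelope sketch of that sizing logic in Python. The ~10 MHz-per-IO figure and the 2.1 GHz core speed are just the ballpark numbers from this thread, not official Ceph guidance, so treat the output as an order-of-magnitude estimate and benchmark your own hardware.

# Back-of-the-envelope CPU sizing from a target IOPS figure.
# Assumption: ~10 MHz of CPU per IO regardless of IO size -- a ballpark
# figure from this thread, not an official Ceph recommendation.

MHZ_PER_IO = 10.0

def cores_needed(target_iops, core_ghz=2.1):
    """Estimate how many cores of the given clock speed target_iops needs."""
    return target_iops * MHZ_PER_IO / (core_ghz * 1000.0)

def iops_possible(cores, core_ghz=2.1):
    """Estimate how many IOPS a given number of cores can push."""
    return cores * core_ghz * 1000.0 / MHZ_PER_IO

# Sanity check against the single-SSD data point above (12 x 2.1 GHz cores
# nearly saturated), then a few example targets.
print("12 x 2.1 GHz cores -> ~%.0f IOPS" % iops_possible(12))
for iops in (2000, 5000, 20000):
    print("%6d IOPS target -> ~%.1f x 2.1 GHz cores" % (iops, cores_needed(iops)))

Plug in your own MHz-per-IO number once you've benchmarked a single OSD; it varies a lot with replication, Ceph version and whether the IOs are reads or writes.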

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Mark Nelson
> Sent: 02 April 2015 13:26
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: Cores/Memory/GHz recommendation for SSD based OSD servers
> 
> It's probably more a question of IOPS unless you have really slow SSDs. :)
> 
> Mark
> 
> On 04/02/2015 07:22 AM, Sreenath BH wrote:
> > We have the model with 25 disks per node.
> >
> > We have just two 10G network interfaces per node. Does that not limit
> > the throughput and hence the load on the CPUs?
> >
> > -Sreenath
> >
> > On 4/2/15, Alexandre DERUMIER <aderumier@xxxxxxxxx> wrote:
> >> Hi,
> >>
> >>>> with HP SL4540 server?
> >>
> >> this model?
> >> http://www8.hp.com/h20195/v2/GetDocument.aspx?docname=c04128155
> >>
> >> (60 drives?)
> >>
> >> I think for a full SSD node it'll be impossible to reach maximum
> >> performance; you'll be CPU bound.
> >>
> >>
> >> I think a small node with 6-8 SSD OSDs for 20 cores should be OK.
> >>
> >>
> >> ----- Original Message -----
> >> From: "Sreenath BH" <bhsreenath@xxxxxxxxx>
> >> To: "Christian Balzer" <chibi@xxxxxxx>
> >> Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
> >> Sent: Thursday, 2 April 2015 11:55:52
> >> Subject: Re: Cores/Memory/GHz recommendation for SSD based OSD servers
> >>
> >> Thanks for the tips.
> >> Could anyone share their experience building an SSD pool or an SSD
> >> cache tier with the HP SL4540 server?
> >>
> >> rgds,
> >> Sreenath
> >>
> >> On 4/2/15, Christian Balzer <chibi@xxxxxxx> wrote:
> >>>
> >>> Hello,
> >>>
> >>> On Wed, 1 Apr 2015 18:40:10 +0530 Sreenath BH wrote:
> >>>
> >>>> Hi all,
> >>>>
> >>>> we are considering building all SSD OSD servers for RBD pool.
> >>>>
> >>> I'd advise you to spend significant time reading the various threads
> >>> in this ML about SSD-based pools, both about the current shortcomings
> >>> and limitations of SSD pools and about which SSDs to (not) use.
> >>>
> >>>> Couple of questions:
> >>>>
> >>>> Does Ceph have any recommendation for number of cores/memory/GHz
> >>>> per SSD drive, similar to what is usually followed for hard drives
> >>>> (1 core / 1 GB RAM / 1 GHz speed)?
> >>>>
> >>> Note that the 1 GHz core per OSD only applies to pure HDD OSDs; once
> >>> a journal SSD enters the picture you're likely to want 2-3 times that.
> >>>
> >>> You probably don't want to try this with anything less than the
> >>> upcoming Hammer release, but even with that the current rule for
> >>> SSD-based pools is "the fastest cores you can afford and as many as
> >>> possible". And given the right loads (small write IOPS, basically),
> >>> you're probably still going to be CPU bound.
> >>>
> >>> RAM is the same as with HDD-based OSDs, but given how much more RAM
> >>> helps, I would advise at least 2 GB per OSD and as much as you can
> >>> afford.
> >>>
> >>> Regards,
> >>>
> >>> Christian
> >>>> thanks,
> >>>> Sreenath
> >>>
> >>>
> >>> --
> >>> Christian Balzer        Network/Systems Engineer
> >>> chibi@xxxxxxx           Global OnLine Japan/Fusion Communications
> >>> http://www.gol.com/
> >>>
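
PS: to put rough numbers on the 2 x 10GbE question and the rules of thumb quoted above, here's a quick sanity-check sketch along the same lines. The 25-drive node, the 10 MHz-per-IO figure, the 2.1 GHz core speed and the 2 GB-of-RAM-per-OSD floor are all just the values mentioned in this thread, not official Ceph recommendations.

# Is a 25-drive node with 2 x 10GbE network-bound or CPU-bound?
# All constants are rules of thumb from this thread -- substitute your own.

DRIVES_PER_NODE = 25
NIC_GBIT = 2 * 10          # two 10GbE interfaces, ignoring protocol overhead
CORE_GHZ = 2.1
MHZ_PER_IO = 10.0
RAM_GB_PER_OSD = 2         # "at least 2 GB per OSD"

nic_bytes_per_sec = NIC_GBIT * 1e9 / 8

for io_size in (4 * 1024, 4 * 1024 * 1024):   # 4 kB vs 4 MB IOs
    nic_iops = nic_bytes_per_sec / io_size
    cores = nic_iops * MHZ_PER_IO / (CORE_GHZ * 1000.0)
    print("%9d B IOs: NICs top out at ~%.0f IOPS, needing ~%.0f x %.1f GHz cores"
          % (io_size, nic_iops, cores, CORE_GHZ))

print("RAM floor for %d OSDs: %d GB" % (DRIVES_PER_NODE, DRIVES_PER_NODE * RAM_GB_PER_OSD))

The point being: with small 4 kB IOs the CPUs run out long before two 10GbE links do, and only with large IOs does the network become the limit, which is what Mark meant by it being more a question of IOPS.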




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




