Re: Thumb rule for selecting memory for Ceph OSD node

Hello,

On Sun, 13 Sep 2015 20:31:04 +0300 Vickey Singh wrote:

> Hello Guys
> 
> Doing hardware planning / selection for a new production Ceph cluster.
> Just wondering how I should select memory.
> 
> *I have found two different rules for selecting memory for a Ceph OSD
> (on the Internet / googling / presentations).*
> 
> *#1    1GB / Ceph OSD  or 2GB / Ceph OSD ( for more performance )*
> 
> For example: for a 12 OSD system it will be 12GB or 24GB. In this case
> doesn't the disk size matter?
> 
Up to a point, but not much beyond that. I would never deploy anything
with less than 2GB RAM per OSD.

> 4TB Drive : 12 X 4 = 48TB Raw storage  (  is 24GB sufficient ?  )
> 6TB Drive : 12 X 6 = 72 TB Raw storage  ( is 24  GB Sufficient  ? )
>
Using large drives may look cheaper (denser systems), but Ceph performance
is tightly coupled to the number of OSDs present.
How many nodes and OSDs do you plan to deploy initially, and what are your
actual capacity needs?
 
> 
> *#2   1GB / 1TB of RAW capacity of system*
> 
> 4TB Drive : 12 X 4 = 48TB Raw storage  ( is 48GB more than enough ? )
> 6TB Drive : 12 X 6 = 72 TB Raw storage  ( is 72GB more than enough ? )
>
That was written when 2TB HDDs were "huge".
I would give a node of that size at least 32GB, preferably 64GB (larger
page cache, better read performance).
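
To make the two rules concrete, here is a quick sketch (Python, purely
illustrative; the OSD counts and drive sizes are just your example figures
plugged in):

    # RAM sizing under the two rules of thumb discussed above.
    # Rule 1: a fixed 1-2GB per OSD; Rule 2: 1GB per TB of raw capacity.
    def ram_rule1(n_osds, gb_per_osd=2):
        return n_osds * gb_per_osd        # e.g. 12 OSDs -> 24GB

    def ram_rule2(n_osds, drive_tb):
        return n_osds * drive_tb          # e.g. 12 x 6TB -> 72GB

    for n_osds, drive_tb in [(12, 4), (12, 6), (72, 6)]:
        print("%2d x %dTB: rule1 %3dGB, rule2 %3dGB"
              % (n_osds, drive_tb,
                 ram_rule1(n_osds), ram_rule2(n_osds, drive_tb)))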
 
> In case of Dense node
> 
> 72 x 6TB = 432TB Raw storage ( 432GB memory seems to be a HUGE
> investment ? )
> 
RAM is cheap these days, all things considered.

Again, a dense node like that makes very little sense unless you know
exactly what you're doing and have a particular usage pattern (like
archival).
And I sure hope you're not thinking of the Supermicro 72-drive thingy,
because the double drive sleds are shared, so to replace a failed HDD you
need to pull out a good one as well.

With 72 OSDs you need at least 72GHz of aggregate CPU power, twice that if
you use SSD journals. Those CPUs will cost you more than the RAM.
What network do you plan to use to keep 72 OSDs busy?
And unless you deploy something like 10 of them initially, a node of that
size going down will severely impact your cluster performance.
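
For the CPU side, a minimal sketch of that same rule of thumb (roughly
1GHz of aggregate CPU per OSD, doubled with SSD journals); the numbers
below are just your 72-OSD case:

    # Aggregate CPU needed per the ~1GHz-per-OSD rule of thumb,
    # doubled when SSD journals are in use.
    def cpu_ghz_needed(n_osds, ssd_journals=False):
        return n_osds * (2 if ssd_journals else 1)

    print(cpu_ghz_needed(72))        # 72 GHz
    print(cpu_ghz_needed(72, True))  # 144 GHz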

> 
> So which rule should we consider that can stand true for a 12 OSD node
> and even for a 72 OSD node?
2GB per OSD plus OS/other needs, round up to whatever you can afford for
page cache.
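
As a sketch of that recommendation (the 8GB OS allowance and the
power-of-two rounding below are my assumptions, not hard numbers):

    # 2GB per OSD plus OS/other needs, rounded up to the next
    # power-of-two GB so the surplus goes to page cache.
    def node_ram_gb(n_osds, per_osd_gb=2, os_gb=8):
        needed = n_osds * per_osd_gb + os_gb
        size = 1
        while size < needed:
            size *= 2
        return size

    print(node_ram_gb(12))  # 32
    print(node_ram_gb(72))  # 256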

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


