Re: Thoughts on proposed hardware configuration.

Hello,

On Mon, 11 Apr 2016 16:57:40 -0700 Brad Smith wrote:

> We're looking at implementing a 200+TB, 3 OSD-node Ceph cluster to be 
That's 72TB in your setup below, and 3 nodes are of course the bare
minimum; they're going to perform WORSE than an identical, single,
non-replicated node (latencies).
Once you grow the node number beyond your replication size (default 3),
things will speed up.
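For reference, the replication size lives in ceph.conf (or per pool via
"ceph osd pool set"); a minimal sketch with the usual values, adjust to
taste:

    [global]
    osd pool default size = 3       # replicas per object
    osd pool default min size = 2   # keep serving I/O with one replica down
    osd crush chooseleaf type = 1   # place replicas on different hosts

With only 3 hosts and size=3 every host holds a copy of every object,
which is why writes can't be faster than your slowest node.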

> accessed as a filesystem from research compute clusters and "data 
> transfer nodes" (from the Science DMZ network model... link 
> <https://meetings.internet2.edu/media/medialibrary/2015/10/05/20151005-dart-science-dmz-futures-v3.pdf>). 
> The goal is a first step to exploring what we can expect from Ceph in 
> this kind of role...
> 
> Comments on the following configuration would be greatly appreciated!
> 
> Brad
> brad@xxxxxxxxxxxx
> 
> ##########
> 
>  > 1x Blade server - 4 server nodes in a 2U form factor:
>  >     -    1x Ceph admin/Ceph monitor node
>  >     -    2x Ceph monitor/Ceph metadata server node
> 
> 1 2U Four Node Server
> 6028TP-HTR
> 
> Mercury RM212Q 2U Quad-NodeServer:
> 1x Ceph Admin/Ceph Monitor Node:
> 2x Intel Xeon E5-2620v3 Six-Core CPUs
Good enough, might be even better with fewer but faster cores.
Remember to give this node the lowest IP address to become the MON leader.
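If you want to verify the election result once the MONs are up, a quick
check (the leader is simply the monitor with the lowest IP:port in the
monmap):

    ceph quorum_status --format json-pretty | grep quorum_leader_name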

> 32GB DDR4 ECC/REG memory
Depends on what kind of monitoring you're going to do there, but my
primary MON also runs graphite/apache and isn't using even 25% of the 16GB
RAM it has.
So definitely good enough.

> 2x 512GB SSD drives; Samsung 850 Pro
If you can afford it, use Samsung or Intel DC drives, simply so you'll
never have to worry about either performance or endurance.
That said, they should be good enough.
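Either way it's worth watching wear over time; a quick sketch with
smartmontools (attribute names vary by vendor, Media_Wearout_Indicator on
the Intel DC drives, Wear_Leveling_Count on the Samsungs; replace /dev/sdX
with the actual device):

    smartctl -A /dev/sdX | grep -i wear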

> 2x 10GbE DA/SFP+ ports
> 
> 2x Ceph Monitor/Ceph MetaData Nodes
> 2x Intel Xeon E5-2630v3 Eight-Core CPUs
> 64GB DDR4 ECC/REG memory
Probably better with even more memory, given what people said in the very
recent "800TB - Ceph Physical Architecture Proposal" thread, read it.

> 2x 512GB SSD drives; Samsung 850 Pro
> 1x 64GB SATAdom
> 2x 10GbE DA/SFP+ ports
> 

[snip]
> 
>  > 3 Ceph OSD servers (70+TB each):
> 
> Quanta 1U 12-drive storage server
I'd stay with one vendor (Supermicro preferably), but that's me.

> D51PH-1ULH
> 
> Mercury RM112 1U Rackmount Server:
> 2x Intel Xeon E5-2630v3 processors
> 64GB DDR4 ECC/REG memory
Enough, but more RAM can be very beneficial when it comes to reads, both to
keep hot objects in the pagecache and inodes/etc in the SLAB space.
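If you do add RAM, you can also tell the kernel to hang on to those
inodes/dentries a bit longer; a sketch, with 50 just as an example value
(the stock default is 100):

    slabtop -o | head -15              # see what the SLAB caches hold
    sysctl vm.vfs_cache_pressure=50    # <100 favours keeping dentry/inode caches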

> 1x 64GB SATAdom
That's for your OS, one presumes; I'd hate having to shut down the server
to replace it and/or to then re-install things.

> 2x 200GB Intel DC S3710 SSD's
If those were the sadly discontinued S3700s at 365MB/s write speed, you'd
be only slightly below your estimated combined HDD speed of 840MB/s and
your network speed of 1GB/s.
I'd look into the 400GB model OR, if you're happy with 3 DWPD, the 3610
model(s).
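Spelling out the back-of-the-envelope numbers (with filestore the journals
sit in the write path, so they cap write throughput; the S3710 figures are
datasheet sequential writes from memory, please verify):

    12x NL SAS HDD  x ~70MB/s  = ~840MB/s   (your combined HDD estimate)
    2x S3710 200GB  x ~300MB/s = ~600MB/s   (journal ceiling as specced)
    2x S3700 200GB  x  365MB/s = ~730MB/s   (the discontinued model)
    2x S3710 400GB  x ~470MB/s = ~940MB/s   (no longer the bottleneck)
    1x 10GbE link              = ~1.25GB/s raw, ~1GB/s in practice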

> 12x 6TB NL SAS drives
> 1x dual-port 10GbE DA/SFP+ OCP network card
> 

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/


