We're looking at implementing a 200+TB, 3 OSD-node Ceph cluster to be
accessed as a filesystem from research compute clusters and "data
transfer nodes" (from the Science DMZ network model... link).
The goal is a first step toward exploring what we can expect from Ceph in
this kind of role.
Comments on the following configuration would be greatly appreciated! (I've also included a rough capacity sketch after the spec.)

Brad
brad@xxxxxxxxxxxx

##########

> 1x Blade server - 4 server nodes in a 2U form factor:
>   ◦ 1x Ceph admin/Ceph monitor node
>   ◦ 2x Ceph monitor/Ceph metadata server nodes

1x 2U Four-Node Server 6028TP-HTR (Mercury RM212Q 2U Quad-Node Server):

  1x Ceph Admin/Ceph Monitor Node:
    2x Intel Xeon E5-2620v3 Six-Core CPUs
    32GB DDR4 ECC/REG memory
    2x 512GB SSD drives; Samsung 850 Pro
    2x 10GbE DA/SFP+ ports

  2x Ceph Monitor/Ceph Metadata Nodes:
    2x Intel Xeon E5-2630v3 Eight-Core CPUs
    64GB DDR4 ECC/REG memory
    2x 512GB SSD drives; Samsung 850 Pro
    1x 64GB SATADOM
    2x 10GbE DA/SFP+ ports

  Four Hot-Pluggable Systems (Nodes) in a 2U Form Factor. Each Node Supports the Following:
    Dual Socket R (LGA 2011); Supports Intel Xeon Processor E5-2600v3 Family; QPI up to 9.6GT/s
    Up to 1TB ECC LRDIMM, 512GB ECC RDIMM, Up to 2133MHz; Sixteen DIMM Sockets
    One PCI-E 3.0 x16 Low-Profile Slot; One "0 Slot" (x16)
    Intel i350-AM2 Dual-Port GbE LAN
    Integrated IPMI 2.0 with KVM and Dedicated LAN
    Three 3.5 Inch Hot-Swap SATA HDD Bays
    2000W Redundant Power Supplies, Platinum Level (94%)

> 3x Ceph OSD servers (70+TB each):

  Quanta 1U 12-Drive Storage Server D51PH-1ULH (Mercury RM112 1U Rackmount Server):
    2x Intel Xeon E5-2630v3 processors
    64GB DDR4 ECC/REG memory
    1x 64GB SATADOM
    2x 200GB Intel DC S3710 SSDs
    12x 6TB NL-SAS drives
    1x Dual-Port 10GbE DA/SFP+ OCP network card

  General System Specifications:
    Dual Intel Xeon Processor E5-2600v3 Product Family
    Intel C610 Chipset
    Sixteen 2133MHz DDR4 RDIMM Memory Slots
    Twelve 3.5 Inch/2.5 Inch Hot-Plug 12Gb/s SAS or 6Gb/s SATA HDDs
    Four 2.5 Inch Hot-Plug 7mm 6Gb/s SATA Solid State Drives
    Quanta LSI 3008 12Gb/s SAS Mezzanine, RAID 0/1/10
    Intel I350 1GbE Dual Ports
    One Dedicated 1GbE Management Port
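For context, here is a rough back-of-the-envelope sketch (in Python) of where the
200+TB and 70+TB-per-node figures come from, given the drive counts above. The
3-way replication factor is an assumption on my part (Ceph's default pool size),
not something fixed in this configuration:

# Back-of-the-envelope capacity check for the proposed OSD tier.
# Assumption (not part of the spec above): 3-way replication, no erasure coding.

OSD_NODES = 3
DRIVES_PER_NODE = 12
DRIVE_TB = 6          # 6TB NL-SAS drives, decimal TB as marketed
REPLICATION = 3       # assumed; adjust for size=2 or an EC profile

raw_per_node = DRIVES_PER_NODE * DRIVE_TB    # 72 TB per OSD node ("70+TB each")
raw_total = OSD_NODES * raw_per_node         # 216 TB raw ("200+TB")
usable = raw_total / REPLICATION             # ~72 TB usable before overhead

print(f"raw per node: {raw_per_node} TB")
print(f"raw cluster:  {raw_total} TB")
print(f"usable (size={REPLICATION}): {usable:.0f} TB")

The usable figure would obviously change with a different replication size or an
erasure-coded pool.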