Hi Nick,
Hi, just a couple of points: you might want to see if you can get a Xeon v3 board and CPU, as they offer more performance and use less power. OK
You can also get a SM 2U chassis which has 2x 2.5” disk slots at the rear; this would allow you to have an extra 2x 3.5” disks in the front of the server. These two rear slots will be used for the operating system SSDs.
Extra RAM in the OSD nodes would probably help performance a bit. OK
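As a rough sizing check only, the usual rule of thumb of about 1 GB of RAM per 1 TB of OSD storage would put 8 x 6 TB disks per node at around 48 GB just for the OSD daemons, so 64 GB leaves fairly little headroom for page cache and recovery; treat these numbers as estimates rather than a hard requirement.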
How many nodes are you going to have? You might find that bonded 10G networking is sufficient instead of the extra cost of 40GbE networking. I think about 14 or 16 OSD nodes.
3 Metadata/Monitor nodes
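If the bonded 10G route is chosen, a minimal sketch of an LACP (802.3ad) bond on a Debian-style system would look roughly like this; the interface names and addresses are placeholders, and the switch ports would need a matching LACP configuration:

# /etc/network/interfaces - bond two 10G ports with LACP (802.3ad)
auto bond0
iface bond0 inet static
    address 10.0.1.11
    netmask 255.255.255.0
    bond-slaves eth2 eth3
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4

The layer3+4 hash policy spreads traffic from a single host across both links by TCP port, which tends to help with Ceph since each OSD talks to many peers at once.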
Nick

Thanks
Regards
Marco
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Colombo Marco

Hi all,
I have to build a new Ceph storage cluster. After I've read the hardware recommendations and some mail from this mailing list, I would like to buy these servers:

OSD:
SSG-6027R-E1R12L -> http://xo4t.mj.am/link/xo4t/grkj3rk/1/m3tngGzWbOpwg5uXd5lPdw/aHR0cDovL3d3dy5zdXBlcm1pY3JvLm5sL3Byb2R1Y3RzL3N5c3RlbS8yVS82MDI3L1NTRy02MDI3Ui1FMVIxMkwuY2Zt
Intel Xeon E5-2630 v2
64 GB RAM
LSI 2308 IT
2 x SSD Intel DC S3700 400GB
2 x SSD Intel DC S3700 200GB
8 x HDD Seagate Enterprise 6TB
2 x 40GbE for backend network
2 x 10GbE for public network

META/MON:
SYS-6017R-72RFTP -> http://xo4t.mj.am/link/xo4t/grkj3rk/2/Fc3dQ9lM7vImlEFAB-_wDg/aHR0cDovL3d3dy5zdXBlcm1pY3JvLmNvbS9wcm9kdWN0cy9zeXN0ZW0vMVUvNjAxNy9TWVMtNjAxN1ItNzJSRlRQLmNmbQ
2 x Intel Xeon E5-2637 v2
4 x SSD Intel DC S3500 240GB raid 1+0
128 GB RAM
2 x 10 GbE

What do you think? Any feedback, advice, or ideas are welcome!
Thanks so much
Regards,
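For the split between the 40GbE backend and the 10GbE public links above, the corresponding ceph.conf fragment would look roughly like this; the subnets are only placeholders for whatever addressing is actually used:

[global]
    public network = 10.0.1.0/24     # 2 x 10GbE, client and monitor traffic
    cluster network = 10.0.2.0/24    # 2 x 40GbE (or bonded 10GbE), replication and recovery

With that in place the OSDs use the cluster network for replication and recovery between themselves, while clients and monitors stay on the public network.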