Hi ceph-users,

I am currently planning a cluster and would like some input, specifically about the storage nodes. The non-OSD systems will be running on more powerful hardware.

Interconnect as currently planned: 4 x 1 Gbit LACP bonds over a pair of MLAG-capable switches (planned: EX3300). A sketch of the bonding setup I have in mind is at the end of this mail.

So far I would go with Supermicro's 5018A-MHN4 offering; rack space is not really a concern, so only 4 OSDs per U is fine. (The cluster is planned to start with 8 OSD nodes.)

osd-node:
- Avoton C2758 - 8 x 2.40 GHz
- 16 GB ECC RAM
- 16 GB SSD (SATA-DOM) - OS
- 250 GB SSD - journal (MX200 250GB with extreme over-provisioning, staggered deployment, monitored for TBW value)
- 4 x 3 TB OSD - Seagate Surveillance HDD (ST3000VX000), 7200 rpm, 24/7-rated
- 4 x 1 Gbit

per-osd breakdown:
- 3 TB HDD
- 2 x 2.40 GHz (Avoton cores)
- 4 GB RAM
- 8 GB SSD journal (~125 MB/s r/w, i.e. the MX200's roughly 500 MB/s of sequential throughput shared between four journals)
- 1 Gbit

The main question is: will the Avoton CPU suffice? (I reckon the common 1 GHz/OSD suggestion assumes much more powerful CPUs.)

Are there any cost-effective suggestions to improve this configuration?

Will erasure coding be a feasible option? (See the profile sketch at the end of this mail for what I have in mind.)

Does it hurt to run OSD nodes CPU-capped, if you have enough of them?

___
Dominik Hannen
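P.S. For concreteness, the per-node bonding configuration I have in mind - a minimal sketch in Debian /etc/network/interfaces style, assuming the ifenslave package; the interface names and address are made up:

    auto bond0
    iface bond0 inet static
        # aggregate the four on-board 1 Gbit ports via 802.3ad/LACP
        bond-slaves eth0 eth1 eth2 eth3
        bond-mode 802.3ad
        # hash on L3+L4 so concurrent TCP sessions can spread across links
        bond-xmit-hash-policy layer3+4
        bond-miimon 100
        address 192.168.10.11
        netmask 255.255.255.0

The EX3300 pair would carry the matching LACP/MLAG configuration on the switch side.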
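P.P.S. To make the erasure-coding question concrete, a minimal sketch of the kind of profile I would try. k=4/m=2 and the PG counts are assumptions rather than settled choices; with 8 nodes, k+m=6 still allows one chunk per host:

    # define a profile: 4 data chunks + 2 coding chunks, one chunk per host
    ceph osd erasure-code-profile set ec42 k=4 m=2 ruleset-failure-domain=host
    # create a pool using that profile (PG counts are placeholders)
    ceph osd pool create ecpool 256 256 erasure ec42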