Re: Cost- and Power-efficient OSD-Nodes

We tested the M500 960GB for journaling and found it could journal at most 3 spinner OSDs. Based on our testing and usage, I'd strongly recommend you avoid the Crucial consumer drives. We ended up moving those journals back onto the spinners themselves and got better performance. Also, I wouldn't trust their power-loss protection, and I would assume a host is dead if it ever powers down unexpectedly with those as journal devices.
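
For anyone sizing journal SSDs: a quick back-of-envelope check is to compare the SSD's sustained sequential write speed against the combined write speed of the spinners behind it. The figures below are placeholder assumptions rather than measurements, but they land on the same 3-spinner ceiling we hit:

    # Back-of-envelope check: how many spinners can one journal SSD feed?
    # Both throughput figures are assumptions -- substitute numbers measured
    # on your own hardware.
    spinner_seq_write_mb = 130   # assumed sustained sequential write, 7200rpm SATA disk
    ssd_seq_write_mb     = 400   # assumed sustained sequential write, journal SSD
    print("journal SSD keeps up with at most %d spinners"
          % (ssd_seq_write_mb // spinner_seq_write_mb))

In practice the ceiling tends to be lower still, since journal writes are synchronous and consumer SSDs usually fall well short of their sequential specs under that kind of load.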

On Tue, Apr 28, 2015 at 5:34 AM, Dominik Hannen <hannen@xxxxxxxxx> wrote:
Hi ceph-users,

I am currently planning a cluster and would like some input specifically about the storage-nodes.

The non-OSD systems will be running on more powerful hardware.

Interconnect as currently planned:
4 x 1Gbit LACP Bonds over a pair of MLAG-capable switches (planned: EX3300)

So far I would go with Supermicro's 5018A-MHN4 offering; rack space is not really a concern, so only 4 OSDs per U is fine.
(The cluster is planned to start with 8 osd-nodes.)

osd-node:
Avoton C2758 - 8 x 2.40GHz
16 GB RAM ECC
16 GB SSD - OS - SATA-DOM
250GB SSD - Journal (MX200 250GB with extreme over-provisioning, staggered deployment, monitored for TBW value; see the rough endurance sketch after this list)
4 x 3 TB OSD - Seagate Surveillance HDD (ST3000VX000) 7200rpm 24/7
4 x 1 Gbit
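
For the TBW monitoring mentioned above, my rough endurance estimate looks like this (every figure is an assumption -- the rated TBW belongs to the vendor spec sheet and the write volume to the actual workload):

    # Rough journal-SSD endurance estimate; all inputs are assumptions.
    rated_tbw_tb       = 80    # assumed rating for a 250 GB class SSD (check the spec sheet)
    osds_per_journal   = 4
    gb_written_per_osd = 50    # assumed client+replication writes per OSD per day, in GB
    daily_tb = osds_per_journal * gb_written_per_osd / 1000.0
    print("journal absorbs %.2f TB/day -> rated TBW reached in ~%.1f years"
          % (daily_tb, rated_tbw_tb / daily_tb / 365))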

per-osd breakdown:
3 TB HDD
2 x 2.40GHz (Avoton-Cores)
4 GB RAM
8 GB SSD-Journal (~125 MB/s r/w)
1 Gbit

The main question is: will the Avoton CPU suffice? (I reckon the common 1 GHz/OSD suggestion refers to much more powerful CPUs.)
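
As a sanity check on the raw numbers (which says nothing about per-core performance, since the Avoton cores do far less work per clock than the Xeons that rule of thumb presumably assumes):

    # Nominal clock budget per OSD; treat it as an optimistic upper bound,
    # because Avoton cores are much slower per clock than typical Xeon cores.
    cores, clock_ghz, osds = 8, 2.4, 4
    print("nominal %.1f GHz per OSD vs. the ~1 GHz/OSD rule of thumb"
          % (cores * clock_ghz / osds))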

Are there any cost-effective suggestions to improve this configuration?

Will erasure coding be a feasible possibility?
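
For context on that question, the capacity side is simple arithmetic (the k/m values below are only examples, not a recommendation -- the CPU cost of EC on an Avoton is the real unknown):

    # Usable fraction of raw capacity for a few pool layouts.
    def usable_fraction(k, m):   # erasure coding: k data chunks + m coding chunks
        return k / float(k + m)

    print("3x replication: %.2f of raw capacity usable" % (1 / 3.0))
    print("EC k=4, m=2   : %.2f of raw capacity usable" % usable_fraction(4, 2))
    print("EC k=2, m=2   : %.2f of raw capacity usable" % usable_fraction(2, 2))

With 8 nodes, a k+m of 6 at least fits host-level failure domains; whether the Avotons keep up with the coding math is the part I cannot judge.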

Does it hurt to run OSD-nodes CPU-capped, if you have enough of them?

___
Dominik Hannen
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



--
David Burley
NOC Manager, Sr. Systems Programmer/Analyst
Slashdot Media

e: david@xxxxxxxxxxxxxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
