Re: Cost- and Power-efficient OSD-Nodes


 



Hi Dominik,

Answers in line

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Dominik Hannen
> Sent: 28 April 2015 10:35
> To: ceph-users@xxxxxxxxxxxxxx
> Subject:  Cost- and Power-efficient OSD-Nodes
> 
> Hi ceph-users,
> 
> I am currently planning a cluster and would like some input, specifically
> about the storage-nodes.
> 
> The non-osd systems will be running on more powerful hardware.
> 
> Interconnect as currently planned:
> 4 x 1Gbit LACP Bonds over a pair of MLAG-capable switches (planned:
> EX3300)

If you can do 10G networking, it's really worth it. I found that with 1G,
latency affects your performance before you max out the bandwidth. We got
some Supermicro servers with 10GBase-T onboard for a tiny price difference
and some basic 10GBase-T switches.
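One thing worth keeping in mind with the 4 x 1Gbit plan: LACP balances per flow, so any single TCP stream between two OSDs is still capped at 1Gbit no matter how many links are in the bond. A sketch of what such a bond looks like on Debian/Ubuntu with ifenslave (interface names and address are placeholders):

```
# /etc/network/interfaces sketch -- names/addresses are assumptions
auto bond0
iface bond0 inet static
    address 10.0.0.11
    netmask 255.255.255.0
    bond-slaves eth0 eth1 eth2 eth3
    bond-mode 802.3ad                 # LACP, needs matching switch config
    bond-xmit-hash-policy layer3+4    # spread flows; one flow still = 1Gbit max
    bond-miimon 100
```

The layer3+4 hash spreads different flows across the links, which helps aggregate throughput but does nothing for single-stream latency, which is why 10G wins even when total bandwidth looks comparable.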

> 
> So far I would go with Supermicros 5018A-MHN4 offering, rack-space is not
> really a concern, so only 4 OSDs per U is fine.
> (The cluster is planned to start with 8 osd-nodes.)
> 
> osd-node:
> Avoton C2758 - 8 x 2.40GHz
> 16 GB RAM ECC
> 16 GB SSD - OS - SATA-DOM
> 250GB SSD - Journal (MX200 250GB with extreme over-provisioning,
> staggered deployment, monitored for TBW-Value)
> 4 x 3 TB OSD - Seagate Surveillance HDD (ST3000VX000) 7200rpm 24/7
> 4 x 1 Gbit
> 

Not sure if that SSD would be suitable for a journal. I would recommend
going with one of the Intel DC S3700s. You could also save a bit and run the
OS from it.
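What makes or breaks a journal SSD is its performance on small O_DSYNC writes, which many consumer drives handle poorly regardless of their headline specs. A quick way to check a candidate drive before buying eight of them is a fio job like this (the device name is a placeholder, and this writes to the raw device, so only run it on a disk with no data on it):

```
; journal-test.fio -- sketch of the access pattern a Ceph journal sees
[journal-test]
filename=/dev/sdX
direct=1
sync=1
rw=write
bs=4k
numjobs=1
iodepth=1
runtime=60
```

Run it with `fio journal-test.fio`. Drives with power-loss protection (like the DC S3700) typically sustain tens of thousands of these sync IOPS; consumer drives often manage only a few hundred.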

I would also consider a more NAS/enterprise-friendly HDD.

> per-osd breakdown:
> 3 TB HDD
> 2 x 2.40GHz (Avoton-Cores)
> 4 GB RAM
> 8 GB SSD-Journal (~125 MB/s r/w)
> 1 Gbit
> 
> The main question is, will the Avoton CPU suffice? (I reckon the common
> 1GHz/OSD suggestion is in regard to much more powerful CPUs.)
> 

The CPU might be on the limit, but it would probably suffice. You likely
won't max out all the cores, but the low per-core speed of the CPU might
increase latency, which may or may not be a problem for you.
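On paper the proposed node clears the 1GHz/OSD rule of thumb with room to spare; the catch is that Avoton GHz are not Xeon GHz, so the raw numbers flatter it. A quick sanity check using the figures from the spec above:

```python
# Per-OSD resource check for the proposed node (numbers from the spec
# above; 1 GHz/OSD is the usual rule of thumb, calibrated on Xeon-class
# cores, so treat the Avoton result with some scepticism).
cores, clock_ghz, ram_gb, osds = 8, 2.40, 16, 4

cpu_ghz_per_osd = cores * clock_ghz / osds   # GHz-equivalent per OSD
ram_gb_per_osd = ram_gb / osds               # GB RAM per OSD

print(cpu_ghz_per_osd)   # 4.8
print(ram_gb_per_osd)    # 4.0
```

4.8 GHz-equivalent and 4 GB per OSD are comfortable by the rule of thumb; the open question is how much a weak in-order Atom core discounts those GHz, especially for latency-sensitive operations and recovery.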

> Are there any cost-effective suggestions to improve this configuration?

Have you looked at a normal Xeon-based server but with more disks per node?
Depending on how much capacity you need, spending a little more per server
but getting more disks into each one might work out cheaper.

There are some interesting SuperMicro combinations, or if you want to go
really cheap, you could buy the case, motherboard, CPU, etc. separately and
build it yourself.

> 
> Will erasure coding be a feasible possibility?

Should be fine
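One constraint worth knowing up front: with a host-level failure domain, k + m for an erasure-code profile can't exceed the number of nodes, so 8 nodes rules out wide profiles. A small illustration of the storage trade-off against 3x replication (profile choices here are examples, not recommendations):

```python
# Raw bytes stored per usable byte for a k+m erasure-code profile,
# compared with 3x replication. With 8 hosts as the failure domain,
# k + m <= 8, ideally with spare hosts so recovery has somewhere to go.
def raw_per_usable(k, m):
    """Raw storage consumed per usable byte for k data + m coding chunks."""
    return (k + m) / k

ec_4_2 = raw_per_usable(4, 2)   # fits 8 hosts with slack
rep3 = raw_per_usable(1, 2)     # 3x replication expressed the same way

print(ec_4_2)   # 1.5
print(rep3)     # 3.0
```

So a 4+2 profile halves the raw-capacity cost versus 3x replication while surviving two simultaneous host failures. The flip side is that encoding/decoding is CPU-heavier than replication, which matters more on an Avoton than it would on a Xeon.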

> 
> Does it hurt to run OSD-nodes CPU-capped, if you have enough of them?

They can time out and start flapping in and out of the cluster.
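If CPU-bound OSDs do start missing heartbeats, the grace period can be stretched in ceph.conf, at the cost of slower detection of genuinely failed OSDs. A sketch (the values are illustrative, not recommendations):

```
# ceph.conf fragment -- gives slow OSDs more time to answer peer
# heartbeats before they are reported down and start flapping.
[osd]
osd heartbeat grace = 35      # default 20 seconds
osd heartbeat interval = 6    # default 6 seconds
```

This treats the symptom, not the cause; if OSDs are regularly missing a 20-second grace window, the cluster will also feel slow to clients.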

> 
> ___
> Dominik Hannen
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com







