Re: Big or small node?

Hi,

I'd almost always go with more, less beefy nodes rather than fewer big ones. You're much more vulnerable if the big one(s) die, and with smaller nodes re-replication after a failure will not impact your cluster as much.

I also find it easier to extend a cluster with smaller nodes. At least it feels like you can grow in smoother steps, at your preferred rate, instead of in big chunks of extra added storage.

But I guess it depends on intended cluster usage.

In your example you can lose 1 of the smaller nodes (depending on replication level and total space usage, of course), but losing the big one means nothing works.

With only 1 node I would probably not go Ceph and just opt for ZFS or RAID6 instead (and drop the extra SSDs and get 12x SATA) - it will probably perform better and you'll have more total space, assuming you'd go with 2x replication in Ceph.
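To put rough numbers on that, here is a back-of-the-envelope sketch (my own arithmetic, not from the thread) comparing usable space for the configurations quoted below, assuming 2x replication for Ceph and double parity for RAID6:

```python
# Usable-capacity sketch. Drive counts come from the quoted configs;
# the 2x-replication and double-parity assumptions are mine.

def ceph_usable(drives, size_tb=4, replication=2):
    """Raw capacity divided by the replication factor."""
    return drives * size_tb / replication

def raid6_usable(drives, size_tb=4):
    """RAID6 loses two drives' worth of capacity to parity."""
    return (drives - 2) * size_tb

print(ceph_usable(10))   # big node, 10x 4 TB, Ceph 2x -> 20.0 TB usable
print(raid6_usable(12))  # same chassis, 12x 4 TB SATA, RAID6 -> 40 TB usable
print(ceph_usable(9))    # 3 small nodes, 3x 4 TB each, Ceph 2x -> 18.0 TB usable
```

So with 2x replication, Ceph on the big chassis gives you half the usable space of the same box running RAID6 on 12 drives.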

Cheers,
Martin


On Wed, Nov 20, 2013 at 8:47 AM, Daniel Schwager <Daniel.Schwager@xxxxxxxx> wrote:
Hello,

we are going to set up a 36-40 TB (raw) test setup in our company for disk2disk2tape backup. Now we have to decide whether to go the high- or low-density Ceph way.

--- The big node (only one necessary):

1 x Supermicro, System 6027R-E1R12T with 2 x CPU E5-2620v2  (6 core (Hyper threading) per CPU), 32 GB with
- 2 x SSD  80GB  (RAID1, OS only)
- 2 x SSD 100GB, Intel S3700 (bandwidth R/W: 500/200 MB/s) for 10 journals (5 per SSD)
- 10 x 4TB Seagate Enterprise Capacity ST4000NM0033
- 2 x embedded 10GBit for public/storage network

I would also install 1 monitor on this node, which would also contain the 10 OSDs.
The price is about 20 US cents/GB.

---- The small node (ok - we have to buy 3 of them) could be like

3 x Supermicro, SuperServer 6017R-TDF+ with 1 x E5-2603 (4 cores without Hyper threading), 16GB with
1 x 120 GB SSD Intel S3500 SATA for OS and 3 OSD journals (bandwidth R/W: 340/100 MB/s)
2 x Intel® PRO/1000 PT Dual Port Server Adapter (LACP link aggregation, 2 ports for public-, 2 ports for storage network)
3 x 4TB Seagate Enterprise Capacity ST4000NM0033

I would also install a monitor on each node.
The price is about 23 US cents/GB.
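For what it's worth, the per-OSD share of journal write bandwidth works out similarly in both cases (a rough sketch using only the sequential-write figures quoted above; real journal throughput also depends on sync behaviour and I/O pattern):

```python
# Per-OSD share of journal SSD write bandwidth, from the quoted specs.
# This is pure sequential-write arithmetic, not a benchmark.

def journal_bw_per_osd(ssd_write_mb_s, journals_per_ssd):
    """Each journal gets an equal share of its SSD's write bandwidth."""
    return ssd_write_mb_s / journals_per_ssd

# Big node: Intel S3700, ~200 MB/s write, 5 journals per SSD.
print(journal_bw_per_osd(200, 5))  # 40.0 MB/s per OSD
# Small node: Intel S3500, ~100 MB/s write, 3 journals per SSD.
print(journal_bw_per_osd(100, 3))  # ~33.3 MB/s per OSD
```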

I think the performance (because of the "better" components like 10GBit, SSD, CPU) is much better on the big node. Because we may not add more HDDs to the cluster later, I'm not sure how to decide - big or small node.

Is there a recommendation? Maybe also with regard to my chosen hardware components?

best regards
Danny

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

