Re: 2TB useable - small business - help appreciated

> On 30 July 2016 at 8:51, Richard Thornton <richie.thornton@xxxxxxxxx> wrote:
> 
> 
> Hi,
> 
> Thanks for taking a look, any help you can give would be much appreciated.
> 
> In the next few months or so I would like to implement Ceph for my
> small business because it sounds cool and I love tinkering.
> 
> The requirements are simple: I only need a minimum of 2TB (usable) of
> (highly) redundant file storage for our Macs, Ubuntu and vSphere to
> use. Ubuntu is usually my distro of choice.
> 

What will you be using? RBD, CephFS?

> I already have the following spare hardware that I could use:
> 
> 4 x Supermicro c2550 servers
> 4 x 24GB Intel SLC drives
> 6 x 200GB Intel DC S3700
> 2 x Intel 750 400GB PCIe NVMe
> 4 x 2TB 7200rpm drives
> 10GBe NICs
> 

Since you only need 2TB of usable storage, I would suggest skipping spinning disks and going all SSD/flash.

For example, take the Samsung PM863 SSDs. They go up to 4TB per SSD right now. They aren't cheap, but the price per I/O is low. Spinning disks are cheap per GB of storage, but very expensive per I/O.

Per server:
- a simple SSD for the OS
- multiple SSDs for OSDs (rough sketch below)
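
As a rough sketch only (ceph-deploy era syntax, with hypothetical host and device names rather than your actual layout), one OSD per data SSD would look something like:

  # one OSD per SSD; the journal is co-located on the same device by default
  ceph-deploy osd create node1:sdb node1:sdc
  ceph-deploy osd create node2:sdb node2:sdc
  ceph-deploy osd create node3:sdb node3:sdc
  ceph-deploy osd create node4:sdb node4:sdc

You could also point the journals at the DC S3700s or the NVMe drives, but in an all-flash setup that is less critical.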

> I am a little confused about how I should set it up. I have 4 servers, so
> it's going to look more like your example PoC environment; should I
> just use 3 of the 4 servers to save on energy costs (the 4th server
> could be a cold spare)?
> 

No, more machines are better. I would go with all 4.
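
With 4 nodes and the default 3x replication, the cluster can lose a node and still re-replicate onto the remaining three. A minimal sketch of the pool settings involved (hypothetical pool name; pick pg_num to match your OSD count):

  ceph osd pool create data 128 128 replicated
  ceph osd pool set data size 3
  ceph osd pool set data min_size 2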

> So I guess I will have my monitor nodes on my OSD nodes.
> 
> Would I just have each of the physical nodes with just one 2TB disk?
> Would I use BlueStore (it looks cool, but I read it's not stable until
> later this year)?
> 
> I have no idea what I should do for RGW, RBD and CephFS; should I
> just have them all running on the 3 nodes?
> 

I always try to spread services: MONs on dedicated hardware, OSDs on their own nodes, and the same for RGW and the CephFS MDS.

It is not a requirement per se, but it makes things easier to run.
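
With only 4 servers you will end up co-locating some daemons anyway. Purely as an illustration (not a layout from this thread), a common small-cluster compromise looks like:

  node1: mon + osd
  node2: mon + osd
  node3: mon + osd
  node4: osd (+ mds/rgw later, if you end up needing CephFS or RGW)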

Wido

> Thanks again!
> 
> Richard
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



