Re: 2TB useable - small business - help appreciated

Hi Richard, 

It would be useful to know what you're currently using for storage, as that would help in recommending a strategy. My guess is an all-CephFS setup might be best for your use case. I haven't tested this myself, but I'd mount CephFS on the OSD nodes with the FUSE client and export it over NFS or Samba, so something resembling a Gluster setup. My preference would be to use separate gateway servers, but if you are limited to those 4 servers I don't think you have another option.
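A minimal sketch of what I mean, assuming hypothetical mon addresses, mount points and export options (untested, so treat it as a starting point):

    # mount CephFS with the FUSE client on each gateway/OSD node
    ceph-fuse -m mon1:6789 /mnt/cephfs

    # then re-export it over NFS, e.g. in /etc/exports
    # (knfsd needs an explicit fsid for FUSE filesystems)
    /mnt/cephfs  *(rw,sync,fsid=20,no_subtree_check)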

Your Ubuntu clients could mount CephFS directly, while the OS X clients could use Samba or NFS. I'm no expert on ESXi integration, but from recent threads on this list it seems NFS is the simplest way of getting decent performance at the moment.
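For the Ubuntu machines that would just be the kernel client, something like this (the mon address and secret file path are placeholders):

    sudo mount -t ceph mon1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret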

If you only have those servers to work with, three of them would run MONs and the non-MON server should run the active MDS. I'd run a standby or standby-replay MDS on the MON server with the highest IP:port. If you've got any spare RAM handy, put as much of it as you can in the MDS box.
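The standby-replay MDS can be set in ceph.conf, along these lines (the section name here is just an example):

    [mds.standby]
        mds standby replay = true
        mds standby for rank = 0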

Agree with Wido that all SSD would be the way to go for such a small capacity requirement.

On Sat, Jul 30, 2016 at 2:12 PM, Wido den Hollander <wido@xxxxxxxx> wrote:

> On 30 July 2016 at 8:51, Richard Thornton <richie.thornton@xxxxxxxxx> wrote:
>
>
> Hi,
>
> Thanks for taking a look, any help you can give would be much appreciated.
>
> In the next few months or so I would like to implement Ceph for my
> small business because it sounds cool and I love tinkering.
>
> The requirements are simple: I only need a minimum of 2TB (usable) of
> (highly) redundant file storage for our Macs, Ubuntu and vSphere to
> use. Ubuntu is usually my distro of choice.
>

What will you be using? RBD, CephFS?

> I already have the following spare hardware that I could use:
>
> 4 x Supermicro c2550 servers
> 4 x 24GB Intel SLC drives
> 6 x 200GB Intel DC S3700
> 2 x Intel 750 400GB PCIe NVMe
> 4 x 2TB 7200rpm drives
> 10GBe NICs
>

Since you only need 2TB of usable storage, I would suggest skipping spinning disks and going completely SSD/flash.
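To put rough numbers on it: with Ceph's default 3x replication, 2TB usable means 6TB raw, and since you want to stay well below the default 85% near-full warning, something like 8TB of raw SSD capacity across the cluster would be comfortable.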

For example, take the Samsung PM863 SSDs. They go up to almost 4TB per SSD right now. They aren't cheap, but the price per I/O is low. Spinning disks are cheap per gigabyte, but very expensive per I/O.

Per server:
- one simple SSD for the OS
- multiple SSDs for OSDs
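Creating the OSDs is then one line per disk with ceph-deploy. A minimal sketch, assuming hypothetical host/device names and journals co-located on the same SSDs:

    ceph-deploy osd create node1:sdb node1:sdc node1:sdd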

> I am a little confused about how I should set it up. I have 4 servers,
> so it's going to look more like your example PoC environment; should I
> just use 3 of the 4 servers to save on energy costs (the 4th server
> could be a cold spare)?
>

No, more machines are better; I would go for 4. With four nodes and 3x replication, the cluster can lose a node and still re-replicate back to full redundancy.

> So I guess I will have my monitor nodes on my OSD nodes.
>
> Would I just have each of the physical nodes with just one 2TB disk?
> Would I use BlueStore (it looks cool, but I read it's not stable until
> later this year)?
>
> I have no idea what I should do for RGW, RBD and CephFS; should I
> just have them all running on the 3 nodes?
>

I always try to spread services: MONs on dedicated hardware, OSDs on their own nodes, and the same for RGW and CephFS MDS servers.

It is not a requirement per se, but it makes things easier to run.

Wido

> Thanks again!
>
> Richard

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
