Re: 2TB useable - small business - help appreciated

Hello,

On Sat, 30 Jul 2016 16:51:10 +1000 Richard Thornton wrote:

> Hi,
> 
> Thanks for taking a look, any help you can give would be much appreciated.
> 
> In the next few months or so I would like to implement Ceph for my
> small business because it sounds cool and I love tinkering.
>
A commendable attitude, but most likely a horrible idea in the context of
running business-critical functions within the parameters given below.
 
> The requirements are simple, I only need a minimum of 2TB (useable) of
> (highly) redundant file storage for our Mac's, Ubuntu and VSphere to
> use, Ubuntu is usually my distro of choice.
> 
And so the horror story begins.
If this were exclusively for Linux clients (Ubuntu), I'd still say "go
ahead", but with vSphere and Macs in the picture it's already a nightmare
in the making.

The only way to reliably serve the latter two would be via NFS, as David
pointed out.
Meaning you're looking at a pacemaker-based HA NFS head in front of Ceph,
with the data coming either from RBD or CephFS, both costing you
performance and other resources you likely don't have.

At this point I'd go for a DRBD/pacemaker/NFS cluster pair and be done.
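
For illustration only, a minimal sketch of what such an HA NFS head over
RBD could look like with the crm shell; the resource names, device paths,
network and export below are assumptions, not a tested configuration:

  # Assumes the RBD image is already created and gets mapped on the
  # active node (e.g. via the rbd resource agent shipped with Ceph,
  # or rbdmap).
  crm configure primitive p_fs ocf:heartbeat:Filesystem \
      params device="/dev/rbd/rbd/nfsdata" directory="/export/nfsdata" \
      fstype="xfs"
  crm configure primitive p_export ocf:heartbeat:exportfs \
      params directory="/export/nfsdata" clientspec="192.168.1.0/24" \
      options="rw,no_root_squash" fsid="1"
  crm configure primitive p_vip ocf:heartbeat:IPaddr2 \
      params ip="192.168.1.100" cidr_netmask="24"
  crm configure group g_nfs p_fs p_export p_vip

The DRBD variant looks much the same, just with a DRBD master/slave
resource instead of the RBD mapping underneath the filesystem.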


> I already have the following spare hardware that I could use:
Is this a "hard" list or can you afford more/different HW?

> 
> 4 x Supermicro c2550 servers
Which one exactly?
The one with the 4 hotswap bays?
How much RAM?

Ceph likes fast CPUs, especially with SSDs, and these Atom-class C2550
boards are not a good base to begin with.

> 4 x 24GB Intel SLC drives
Only good for OS.

> 6 x 200GB Intel DC S3700
Nice, but a bit mismatched with the rest of your setup.
With 2 more you could do per node:
2x DC S3700 200GB for journal and cache-tier.
1x HDD for base storage.
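
Roughly, and only as an illustrative sketch (device paths, pool names
and the target size are assumptions), that layout translates to:

  # OSD with its journal on an S3700 partition
  # (ceph-deploy era syntax: HOST:DATA:JOURNAL)
  ceph-deploy osd prepare node1:/dev/sdd:/dev/sdb1
  # SSD-backed cache pool in front of the HDD-backed pool
  # (the ssd-cache pool needs a CRUSH rule placing it on the SSD OSDs)
  ceph osd tier add rbd ssd-cache
  ceph osd tier cache-mode ssd-cache writeback
  ceph osd tier set-overlay rbd ssd-cache
  ceph osd pool set ssd-cache target_max_bytes 150000000000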

> 2 x Intel 750 400GB PCIe NVMe
Consumer grade, low endurance, avoid using with Ceph (for either
journal or cache tier) unless you have _VERY_ low write activity.

> 4 x 2TB 7200rpm drives
At a replication of 3 (which you definitely WANT), 4x2TB gives you about
2.6TB raw, so after the full ratios that's just enough for your 2TB.
But as pointed out, this is going to be slow, painfully so.
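
For reference, the replication level is just a pool property, e.g. for
the default rbd pool:

  ceph osd pool set rbd size 3
  ceph osd pool set rbd min_size 2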

> 10GBe NICs
>
The least of your worries, unless you go all SSD.
 
> I am a little confused on how I should set it up, I have 4 servers so
> it's going to look more like your example PoC environment, should I
> just use 3 of the 4 servers to save on energy costs (the 4th server
> could be a cold spare)?
> 
As mentioned, with Ceph it's the more the merrier; use them all.

> So I guess I will have my monitor nodes on my OSD nodes.
> 
Indeed they would, if you're still going ahead with this.
That can work just fine, provided you have enough RAM and CPU and fast
storage for the mon leveldb.
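
A minimal sketch of what co-located mons look like in ceph.conf;
hostnames and addresses are placeholders:

  [global]
  fsid = <your cluster uuid>
  mon_initial_members = node1, node2, node3
  mon_host = 192.168.1.11,192.168.1.12,192.168.1.13
  # /var/lib/ceph/mon lives on the OS disk by default, so put that on
  # the SLC/DC SSDs, not on the 2TB spinners.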

> Would I just have each of the physical nodes with just one 2TB disk,
> would I use BlueStore (it looks cool but I read it's not stable until
> later this year)?

That timeline is overly optimistic IMHO, but yes, don't use BlueStore
for this.
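
If you want to be explicit about it (filestore is the default in Jewel
anyway), that's a single ceph.conf line:

  [osd]
  osd objectstore = filestore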

> 
> I have no idea on what I should do for RGW, RBD and CephFS, should I
> just have them all running on the 3 nodes?
> 
I don't see how RGW and CephFS enter your setup at all; RBD is part of
the Ceph basics and needs no extra server.
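
To illustrate (pool and image names are placeholders), once the cluster
is up, the image backing the NFS head is a couple of commands:

  rbd create rbd/nfsdata --size 2097152   # size in MB, i.e. 2TB
  rbd map rbd/nfsdata                     # shows up as e.g. /dev/rbd0
  mkfs.xfs /dev/rbd0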

Christian
> Thanks again!
> 
> Richard


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


