Re: Starting a cluster with one OSD node

Hello,

On Sat, 14 May 2016 09:46:23 -0700 Mike Jacobacci wrote:

> Hi Alex,
> 
> Thank you for your response! Yes, this is for a production
> environment... Do you think the risk of data loss with a single node
> would be different than if it were an appliance or a Linux box with
> RAID/ZFS?
>
Depends.

Ceph by default distributes 3 replicas amongst the storage nodes, giving
you fault tolerance along the lines of RAID6.
So (again by default) the smallest cluster you want to start with is 3
nodes.
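
For illustration, those defaults correspond to the following ceph.conf
settings (these are Ceph's stock values, shown here only for reference):

    [global]
    osd pool default size = 3       # replicas kept per object
    osd pool default min size = 2   # replicas required to keep serving I/O
                                    # (effective default when size = 3)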

Of course you could modify the CRUSH rules to place the 3 replicas based
on OSDs instead of nodes (see the sketch below).

However, that only leaves you with 3 disks' worth of capacity in your
case, and you still face the data movement Alex mentioned once you add
more nodes AND modify the CRUSH rules back to host-based placement.
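
A minimal sketch of that CRUSH change, in case it helps (file names are
illustrative): either set the failure domain to OSD before deployment in
ceph.conf:

    [global]
    # pick OSDs instead of hosts as the failure domain (default is 1 = host)
    osd crush chooseleaf type = 0

or edit the CRUSH map on a running cluster:

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # in the replicated rule, change
    #   step chooseleaf firstn 0 type host
    # to
    #   step chooseleaf firstn 0 type osd
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new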

Lastly, I personally wouldn't deploy anything that's a single point of
failure (SPoF) in production.
 
Christian

> Cheers,
> Mike
> 
> > On May 13, 2016, at 7:38 PM, Alex Gorbachev <ag@xxxxxxxxxxxxxxxxxxx>
> > wrote:
> > 
> > 
> > 
> >> On Friday, May 13, 2016, Mike Jacobacci <mikej@xxxxxxxxxx> wrote:
> >> Hello,
> >> 
> >> I have a quick and probably dumb question… We would like to use Ceph
> >> for our storage, I was thinking of a cluster with 3 Monitor and OSD
> >> nodes.  I was wondering if it was a bad idea to start a Ceph cluster
> >> with just one OSD node (10 OSDs, 2 SSDs), then add more nodes as our
> >> budget allows?  We want to spread out the purchases of the OSD nodes
> >> over a month or two but I would like to start moving data over ASAP.
> > 
> > Hi Mike,
> > 
> > Production or test?  I would strongly recommend against one OSD node
> > in production.  Not only is there a risk of hangs and data loss due
> > to e.g. a filesystem or kernel issue, but as you add nodes the data
> > movement will introduce a good deal of overhead.
> > 
> > Regards,
> > Alex
> > 
> >  
> >> 
> >> Cheers,
> >> Mike
> >> 
> >> 
> > 
> > 
> > --
> > Alex Gorbachev
> > Storcium
> > 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



