HEALTH_WARN active+degraded on fresh CentOS 6.5 install

On Wed, 2 Jul 2014 14:25:49 +0000 (UTC) Brian Lovett wrote:

> Christian Balzer <chibi at ...> writes:
> 
> 
> > So either make sure these pools really have a replication of 2 by
> > deleting and re-creating them or add a third storage node.
> 
> 
> 
> I just executed "ceph osd pool set {POOL} size 2" for both pools.
> Anything else I need to do? I still don't see any change in the status
> of the cluster. We're adding a 3rd storage node, but why is this an
> issue in the first place? I don't see anything anywhere that says you
> have to have a minimum number of OSDs for Ceph to function. Even the
> quickstart only uses 3, so I assumed 8 would be fine as well.
> 
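
First, confirm the size change actually applied; a quick sanity check
along these lines (the pool name is just the placeholder from your own
command):

  ceph osd pool get {POOL} size
  ceph osd dump | grep 'replicated size'

Both should now report size 2 for the pools you changed.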

Read EVERYTHING you can find about crushmap rules.
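
For example, to see what the map actually says (output paths below are
arbitrary):

  ceph osd getcrushmap -o /tmp/crushmap             # grab the compiled CRUSH map
  crushtool -d /tmp/crushmap -o /tmp/crushmap.txt   # decompile it to plain text

In the decompiled rules, the default replicated rule normally contains a
"step chooseleaf firstn 0 type host" line; "type host" is the failure
domain, i.e. each replica must land on a different storage node.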

The quickstart (I think) talks about 3 storage nodes, not OSDs.

Ceph is quite good when it comes to defining failure domains; the default
is to segregate replicas at the storage node level.
What good is a replication of 3 when all 3 copies sit on OSDs in the same
host? With only two storage nodes, a pool of size 3 can never place its
third replica on a distinct host, which is why those PGs stay degraded.
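
A couple of commands make this easy to see for yourself:

  ceph osd tree    # shows how your 8 OSDs are grouped under the host buckets
  ceph -s          # PGs should go active+clean once every replica can be placed

This is only a rough sketch, but with size 2 set (or a third node added)
the degraded PGs should clear on their own.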

Christian


-- 
Christian Balzer        Network/Systems Engineer                
chibi at gol.com   	Global OnLine Japan/Fusion Communications
http://www.gol.com/

