[ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5


 



Christian Balzer <chibi at ...> writes:

> Read EVERYTHING you can find about crushmap rules.
> 
> The quickstart (I think) talks about 3 storage nodes, not OSDs.
> 
> Ceph is quite good when it comes to defining failure domains; the default
> is to segregate at the storage node level.
> What good is a replication of 3 when all 3 OSDs are on the same host?
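
(For anyone hitting this in the archives: if I understand it right, the
host-level default comes from the "chooseleaf ... type host" step in the
default CRUSH rule. A decompiled crushmap from this era should look
roughly like the following; exact rule and bucket names vary by release:

    # ceph osd getcrushmap -o crush.bin && crushtool -d crush.bin -o crush.txt
    rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host   # one replica per host
        step emit
    }

Changing "type host" to "type osd" would let all replicas land on a single
host, which clears the warning on a one-node test cluster but throws away
exactly the failure-domain protection described above.)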


Agreed, which is why I had defined the default as 2 replicas. I had hoped
that this would work, but I will be adding a third host today or tomorrow;
hopefully that takes care of the issue. I'll then try another fresh install
and see if I can get things going.
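
Once the third host is in, the plan is to verify that it shows up as its
own CRUSH bucket and that the pools really are at 2 replicas, roughly
along these lines (pool name "data" assumed from the default
data/metadata/rbd set; output format differs a bit between releases):

    ceph osd tree                  # new host should appear as its own bucket
    ceph osd pool get data size    # confirm the replica count
    ceph osd pool set data size 2  # set it explicitly if it isn't 2
    ceph health detail             # watch the degraded PGs recover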





