[ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

Christian Balzer <chibi at ...> writes:


> So either make sure these pools really have a replication of 2 by deleting
> and re-creating them or add a third storage node.



I just executed "ceph osd pool set {POOL} size 2" for both pools. Is there
anything else I need to do? I still don't see any change in the status of the
cluster. We're adding a 3rd storage node, but why is this an issue in the
first place? I can't find anything that says Ceph needs a minimum number of
OSDs to function; even the quick start only uses 3, so I assumed 8 would be
fine as well.
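
In case it helps, here's how I've been double-checking that the setting took
and watching for recovery (same {POOL} placeholder as above; substitute your
actual pool names):

    # confirm the replication size actually changed on each pool
    ceph osd pool get {POOL} size

    # watch the cluster live; PGs should move from active+degraded
    # to active+clean as recovery completes
    ceph -w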



