active+degraded cluster

Hi all, I successfully installed a Ceph cluster (Firefly release) made up
of 3 OSDs and one monitor host.
After that I created a pool and one RBD image for KVM.
It works fine.
I verified that my pool has a replica size of 3, but I read that the
default should be 2.
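
For reference, I checked it with the standard pool-get command (here "rbd"
is just a placeholder for my pool name):

    ceph osd pool get rbd size
    size: 3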
If I shut down an OSD and mark it out, ceph health reports an
active+degraded state, and the cluster remains in that state until I add an
OSD again.
Is this the correct behaviour?
Reading the documentation, I understood that the cluster should repair
itself and return to an active+clean state.
Is it possible that it remains degraded because I have a replica size of 3
but only 2 OSDs?
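
If that is the cause, I suppose I could lower the replica size so that the
two remaining OSDs can satisfy it, something like this (again, "rbd" is a
placeholder for my pool name):

    ceph osd pool set rbd size 2
    ceph osd pool set rbd min_size 1

but I would first like to understand whether the degraded state is expected
here.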

Sorry for my bad English.

Ignazio