active+degraded cluster

On Friday, May 16, 2014, Ignazio Cassano <ignaziocassano at gmail.com> wrote:

> Hi all, I successfully installed a Ceph cluster (Firefly release) made up
> of 3 OSDs and one monitor host.
> After that I created a pool and one RBD image for KVM.
> It works fine.
> I verified that my pool has a replica size of 3, but I read the default
> should be 2.
> When I shut down an OSD and mark it out, ceph health reports
> active+degraded and stays that way until I add another OSD.
> Is this the correct behaviour?
> Reading the documentation, I understood that the cluster should repair
> itself and return to an active+clean state.
> Could it be that it stays degraded because I have a replica size of 3
> and only 2 OSDs?
>
Yep, that's it. You can change the size to 2 if two copies are really all
you need:
ceph osd pool set <foo> size 2
IIRC.
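
Something like the following should let you check the current setting and
drop it; the pool name "foo" is just a placeholder, substitute your own:

    # Check the current replica count for the pool
    ceph osd pool get foo size

    # Drop the replica count to 2 so two OSDs can satisfy it
    ceph osd pool set foo size 2

    # Watch recovery progress; the cluster should return to active+clean
    ceph -w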
-Greg




> Sorry for my bad English.
>
> Ignazio
>


-- 
Software Engineer #42 @ http://inktank.com | http://ceph.com

