Re: PG state issue

Hi Henry, Wido,

On Thu, 1 Mar 2012, Henry C Chang wrote:
> With version 0.42, I found that the pg's state is not in "degrade" any
> more if the number of osds is smaller than that of the replication.
> For example, if I create a cluster of one osd with replication 2, all
> pgs are in active+clean state. (The pgs were in active+clean+degrade
> state in the earlier versions.)

The PG states were tweaked a fair bit for v0.43:

- new 'recovering' state means we are actively recovering the PG (no 
  longer implied by lack of 'clean')
- 'remapped' means we have temporarily remapped a pg to a specific set of 
  OSDs (other than what CRUSH gives us)
- 'clean' specifically means we have the right number of replicas and 
  aren't remapped.

...and the 'degraded' thing you are seeing is fixed.  This is all in place 
in the 'next' or 'master' branches.
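To make the new definitions concrete, here is a minimal sketch of how the state flags could be derived from a PG's replica count and remapping status, per the rules above. The function name and signature are illustrative, not actual Ceph internals, and the assumption that 'clean' excludes a PG that is still recovering is mine:

```python
def compute_states(acting_size, pool_size, remapped, recovering):
    """Return the set of state flags for a PG, per the v0.43 definitions.

    acting_size -- number of OSDs currently holding the PG
    pool_size   -- replication factor requested for the pool
    remapped    -- PG temporarily mapped to OSDs other than what CRUSH gives
    recovering  -- PG is actively being recovered
    """
    states = {"active"}
    if recovering:
        # explicit flag; no longer implied by lack of 'clean'
        states.add("recovering")
    if acting_size < pool_size:
        # fewer replicas than requested -> degraded
        # (e.g. one OSD with replication 2)
        states.add("degraded")
    if remapped:
        states.add("remapped")
    # 'clean': right number of replicas and not remapped
    # (assumption: a PG still recovering is not yet clean)
    if acting_size >= pool_size and not remapped and not recovering:
        states.add("clean")
    return states
```

With one OSD and replication 2, this yields active+degraded rather than active+clean, which is the behavior Henry expected.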

> I think it should be a bug in terms of cluster status although it does
> not cause any other problems to me so far.

Yeah, it's simply a matter of how the internal state is displayed/reported 
to the monitor.

sage

