Re: [ceph-commit] HEALTH_WARN 192 pgs degraded

On 25/10/12 04:40, Sage Weil wrote:
[moved to ceph-devel]

On Wed, 24 Oct 2012, Roman Alekseev wrote:
Hi there,


I've made a simple fresh installation of ceph on a Debian server with the
following configuration:
************************
[global]
     debug ms = 0
[osd]
     osd journal size = 1000
     filestore xattr use omap = true

[mon.a]

     host = serv1
     mon addr = 192.168.0.10:6789

[osd.0]
      host = serv1

[mds.a]
     host = serv1
************************

Everything seems to be working fine, but when I run the "ceph health" command I
receive the following message:
HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean; recovery 21/42 degraded
(50.000%)
This is simply because you only have 1 osd but the default policy is 2x
replication.  As such, all PGs are 'degraded' because they are only
replicated once.  (That is also where the 21/42 figure comes from: 21
objects at 2x replication means 42 expected copies, and the 21 missing
second copies are 50% of them.)

If you add another OSD to your cluster the warning will go away.

sage
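
For reference, a second OSD stanza in ceph.conf might look like the
following (serv2 is a hypothetical second host; the OSD itself would still
need to be created and initialized before it joins the cluster):

[osd.1]
     host = serv2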


Alternatively, you can set the pool(s) replication size to 1 if you only want a single osd for (say) testing:

$ ceph osd pool set <your pool(s)> size 1
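
On a fresh install the default pools are data, metadata and rbd, so a loop
along these lines (a sketch, untested here) should cover all of them:

$ for pool in data metadata rbd; do ceph osd pool set $pool size 1; done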

I find I need to restart ceph after doing the above; it then sorts itself out to a nice healthy status!
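
On a Debian box that would be something like the following (a sketch,
assuming the stock init script; ceph health should report HEALTH_OK once
things settle):

$ sudo service ceph restart    # single-node: restarts the local mon, osd and mds
$ ceph health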

Regards

Mark
