Hi,
In my test cluster I have just one OSD, which is up and in -- 1 osds: 1 up, 1 in
ceph -c /etc/ceph/cluster.conf health detail
HEALTH_WARN Reduced data availability: 2 pgs inactive; Degraded data redundancy: 2 pgs unclean; too few PGs per OSD (2 < min 30)
PG_AVAILABILITY Reduced data availability: 2 pgs inactive
pg 1.0 is stuck inactive for 608.785938, current state unknown, last acting []
pg 1.1 is stuck inactive for 608.785938, current state unknown, last acting []
PG_DEGRADED Degraded data redundancy: 2 pgs unclean
pg 1.0 is stuck unclean for 608.785938, current state unknown, last acting []
pg 1.1 is stuck unclean for 608.785938, current state unknown, last acting []
TOO_FEW_PGS too few PGs per OSD (2 < min 30)
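The TOO_FEW_PGS warning above can be addressed by raising pg_num on the pool. A minimal sketch of the usual rule of thumb from the Ceph docs -- (OSDs * 100) / replica count, rounded up to the next power of two -- with the values for this 1-OSD test cluster assumed (osds=1, replica size 1):

```shell
#!/bin/sh
# Hypothetical sizing sketch; osds and size below are assumptions
# matching this test cluster, not values read from the cluster itself.
osds=1
size=1
target=$(( osds * 100 / size ))   # rule-of-thumb PG target: 100

# Round up to the next power of two.
pg_num=1
while [ "$pg_num" -lt "$target" ]; do
    pg_num=$(( pg_num * 2 ))
done
echo "$pg_num"   # 128
```

The result would then be applied with `ceph osd pool set <pool> pg_num <value>` (pool name left as a placeholder). Note that pg_num can only be increased, not decreased, on older Ceph releases, so it is worth sizing conservatively.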
Placement groups are in an unknown state because the OSDs that host them have not reported to the monitor cluster in a while.
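On a single-OSD test cluster, another common cause of PGs stuck unknown/inactive is that the default pool replica count (3) and the default CRUSH failure domain ("host") can never be satisfied by one OSD on one host, so CRUSH never maps the PGs at all. You can inspect a stuck PG with `ceph pg 1.0 query`. A hypothetical ceph.conf fragment sometimes used for single-node testing -- the option names are standard, but whether they fit your deployment is an assumption to verify:

```ini
# Test-only settings for a single-node, single-OSD cluster.
[global]
# Default replica count is 3, which one OSD can never satisfy.
osd pool default size = 1
osd pool default min_size = 1
# Default failure domain is "host"; with one host, allow CRUSH to
# choose leaves at the OSD level (type 0).
osd crush chooseleaf type = 0
```

These defaults only apply to pools created afterwards; for an existing pool you would run `ceph osd pool set <pool> size 1` (pool name left as a placeholder).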
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com