Hi,
On 03/20/2015 01:58 AM, houguanghua wrote:
Dear all,
Ceph 0.72.2 is deployed on three hosts, but the cluster's status is
HEALTH_WARN. The status is as follows:

# ceph -s
    cluster e25909ed-25d9-42fd-8c97-0ed31eec6194
     health HEALTH_WARN 768 pgs degraded; 768 pgs stuck unclean;
            recovery 2/3 objects degraded (66.667%)
     monmap e3: 3 mons at
            {ceph-node1=192.168.57.101:6789/0,ceph-node2=192.168.57.102:6789/0,ceph-node3=192.168.57.103:6789/0},
            election epoch 34, quorum 0,1,2 ceph-node1,ceph-node2,ceph-node3
     osdmap e170: 9 osds: 9 up, 9 in
      pgmap v1741: 768 pgs, 7 pools, 36 bytes data, 1 objects
            367 MB used, 45612 MB / 45980 MB avail
            2/3 objects degraded (66.667%)
                 768 active+degraded
*snipsnap*
Other info is depicted here:

# ceph osd tree
# id    weight  type name               up/down reweight
-1      0       root default
-7      0               rack rack03
-4      0                       host ceph-node3
6       0                               osd.6   up      1
7       0                               osd.7   up      1
8       0                               osd.8   up      1
-6      0               rack rack02
-3      0                       host ceph-node2
3       0                               osd.3   up      1
4       0                               osd.4   up      1
5       0                               osd.5   up      1
-5      0               rack rack01
-2      0                       host ceph-node1
0       0                               osd.0   up      1
1       0                               osd.1   up      1
2       0                               osd.2   up      1
The weights of all OSDs are 0. As a result, the OSDs are considered
unusable by Ceph and are not selected for storing objects.

This problem usually occurs in test setups with very small OSD devices.
If that is the case in your setup, you can adjust the weights of the
OSDs or use larger devices (see the sketch below). If your devices are
large enough, you need to check why the weights of the OSDs were not
set accordingly.
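
A minimal sketch, assuming you just want to make the existing small
test OSDs usable and that a uniform CRUSH weight of 1 per OSD is
acceptable for a test cluster (the value 1 here is only an example;
normally the weight reflects the device size in TB):

# for i in 0 1 2 3 4 5 6 7 8; do ceph osd crush reweight osd.$i 1; done

Once the weights are non-zero, CRUSH can map the PGs to the OSDs again
and the cluster should eventually return to active+clean.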
Best regards,
Burkhard
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com