Hi, I created a new Ceph cluster and created a pool, but I am seeing "stuck unclean since forever" errors (output below). Can anyone help point out the possible reasons for this? Thanks.
ceph -s
cluster 602176c1-4937-45fc-a246-cc16f1066f65
health HEALTH_WARN
8 pgs degraded
8 pgs stuck unclean
8 pgs undersized
too few PGs per OSD (2 < min 30)
monmap e1: 1 mons at {ceph-01=172.0.0.11:6789/0}
election epoch 14, quorum 0 ceph-01
osdmap e89: 3 osds: 3 up, 3 in
flags
pgmap v310: 8 pgs, 1 pools, 0 bytes data, 0 objects
60112 MB used, 5527 GB / 5586 GB avail
8 active+undersized+degraded
ceph health detail
HEALTH_WARN 8 pgs degraded; 8 pgs stuck unclean; 8 pgs undersized; too few PGs per OSD (2 < min 30)
pg 5.0 is stuck unclean since forever, current state active+undersized+degraded, last acting [3]
pg 5.1 is stuck unclean since forever, current state active+undersized+degraded, last acting [3]
pg 5.2 is stuck unclean since forever, current state active+undersized+degraded, last acting [3]
pg 5.3 is stuck unclean since forever, current state active+undersized+degraded, last acting [4]
pg 5.7 is stuck unclean since forever, current state active+undersized+degraded, last acting [3]
pg 5.6 is stuck unclean since forever, current state active+undersized+degraded, last acting [2]
pg 5.5 is stuck unclean since forever, current state active+undersized+degraded, last acting [4]
pg 5.4 is stuck unclean since forever, current state active+undersized+degraded, last acting [4]
pg 5.7 is active+undersized+degraded, acting [3]
pg 5.6 is active+undersized+degraded, acting [2]
pg 5.5 is active+undersized+degraded, acting [4]
pg 5.4 is active+undersized+degraded, acting [4]
pg 5.3 is active+undersized+degraded, acting [4]
pg 5.2 is active+undersized+degraded, acting [3]
pg 5.1 is active+undersized+degraded, acting [3]
pg 5.0 is active+undersized+degraded, acting [3]
too few PGs per OSD (2 < min 30)
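Not part of the original post, but for context: the "2 < min 30" figure is consistent with 8 PGs whose acting sets each hold a single copy (they are undersized), spread across 3 OSDs. A hedged sketch of that arithmetic — the copy count of 1 is read off the `last acting [n]` lines above, and would be 3 if the pool's replication were satisfied:

```shell
#!/bin/sh
# Rough PGs-per-OSD arithmetic matching the warning above.
# Assumptions: 8 PGs, 3 OSDs, and each PG's acting set currently
# holds only 1 copy (undersized), as shown in `ceph health detail`.
PGS=8
OSDS=3
ACTING_COPIES=1   # each "last acting [n]" line lists a single OSD
RATIO=$(( PGS * ACTING_COPIES / OSDS ))
echo "PGs per OSD: $RATIO"   # integer division: 8 / 3 -> 2, well below the min of 30
```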
ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 3.00000 root default
-2 3.00000 host ceph-01
2 1.00000 osd.2 up 1.00000 1.00000
3 1.00000 osd.3 up 1.00000 1.00000
4 1.00000 osd.4 up 1.00000 1.00000
ceph osd crush tree
[
{
"id": -1,
"name": "default",
"type": "root",
"type_id": 10,
"items": [
{
"id": -2,
"name": "ceph-01",
"type": "host",
"type_id": 1,
"items": [
{
"id": 2,
"name": "osd.2",
"type": "osd",
"type_id": 0,
"crush_weight": 1.000000,
"depth": 2
},
{
"id": 3,
"name": "osd.3",
"type": "osd",
"type_id": 0,
"crush_weight": 1.000000,
"depth": 2
},
{
"id": 4,
"name": "osd.4",
"type": "osd",
"type_id": 0,
"crush_weight": 1.000000,
"depth": 2
}
]
}
]
}
]
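A likely cause, given the tree above: all three OSDs sit under a single host (ceph-01), while the default CRUSH rule places each replica on a different host, so a replicated pool with size > 1 can never reach a full acting set and its PGs stay undersized/degraded. A hedged sketch of the usual single-node workarounds follows — the pool name `rbd` and rule name `replicate_osd` are assumptions, and on older releases the pool option is `crush_ruleset <id>` rather than `crush_rule <name>`:

```shell
# Let CRUSH choose replicas at the OSD level instead of the host level,
# so a single host can hold all copies (reasonable for a test cluster only).
ceph osd crush rule create-simple replicate_osd default osd
ceph osd pool set rbd crush_rule replicate_osd   # older releases: crush_ruleset <id>

# Clear the "too few PGs per OSD" warning by raising the PG count.
# pg_num can only be increased, and pgp_num must be raised to match.
ceph osd pool set rbd pg_num 128
ceph osd pool set rbd pgp_num 128
```

These commands operate on a live cluster, so they are shown as a sketch rather than a tested recipe; the equivalent config-file approach is setting `osd crush chooseleaf type = 0` before creating the cluster.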
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com