I think I was just in a hurry; everything is fine now.
root@ceph-osd-1:/var/log/ceph# ceph -s
cluster 186717a6-bf80-4203-91ed-50d54fe8dec4
health HEALTH_OK
monmap e1: 3 mons at {ceph-osd-1=10.200.1.11:6789/0,ceph-osd-2=10.200.1.12:6789/0,ceph-osd-3=10.200.1.13:6789/0}
election epoch 14, quorum 0,1,2 ceph-osd-1,ceph-osd-2,ceph-osd-3
osdmap e66: 8 osds: 8 up, 8 in
pgmap v1439: 264 pgs, 3 pools, 272 MB data, 653 objects
809 MB used, 31862 MB / 32672 MB avail
264 active+clean
root@ceph-osd-1:/var/log/ceph#
How can I see what's going on in the cluster, i.e. what kind of activity is currently running?
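For what it's worth, a few standard ways to watch cluster activity with the stock `ceph` CLI (a sketch, assuming a 2015-era release such as Hammer; nothing here is specific to this cluster):

```shell
# Stream cluster log events (peering, recovery, health changes) as they
# happen; press Ctrl-C to stop:
ceph -w

# One-shot health summary that expands each warning into its details:
ceph health detail

# List any PGs that are not active+clean (empty output when all is well):
ceph pg dump_stuck
```

`ceph -w` is usually the quickest way to see whether recovery or peering is still in progress after a reboot.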
2015-12-18 14:50 GMT+01:00 Reno Rainz <rainzreno@xxxxxxxxx>:
Hi all,

I rebooted all my OSD nodes; afterwards some PGs got stuck in the peering state.

root@ceph-osd-3:/var/log/ceph# ceph -s
    cluster 186717a6-bf80-4203-91ed-50d54fe8dec4
     health HEALTH_WARN
            clock skew detected on mon.ceph-osd-2
            33 pgs peering
            33 pgs stuck inactive
            33 pgs stuck unclean
            Monitor clock skew detected
     monmap e1: 3 mons at {ceph-osd-1=10.200.1.11:6789/0,ceph-osd-2=10.200.1.12:6789/0,ceph-osd-3=10.200.1.13:6789/0}
            election epoch 14, quorum 0,1,2 ceph-osd-1,ceph-osd-2,ceph-osd-3
     osdmap e66: 8 osds: 8 up, 8 in
      pgmap v1346: 264 pgs, 3 pools, 272 MB data, 653 objects
            808 MB used, 31863 MB / 32672 MB avail
                 231 active+clean
                  33 peering
root@ceph-osd-3:/var/log/ceph#

root@ceph-osd-3:/var/log/ceph# ceph pg dump_stuck
ok
pg_stat  state    up     up_primary  acting  acting_primary
4.2d     peering  [2,0]  2           [2,0]   2
1.57     peering  [3,0]  3           [3,0]   3
1.24     peering  [3,0]  3           [3,0]   3
1.52     peering  [0,2]  0           [0,2]   0
1.50     peering  [2,0]  2           [2,0]   2
1.23     peering  [3,0]  3           [3,0]   3
4.54     peering  [2,0]  2           [2,0]   2
4.19     peering  [3,0]  3           [3,0]   3
1.4b     peering  [0,3]  0           [0,3]   0
1.49     peering  [0,3]  0           [0,3]   0
0.17     peering  [0,3]  0           [0,3]   0
4.17     peering  [0,3]  0           [0,3]   0
4.16     peering  [0,3]  0           [0,3]   0
0.10     peering  [0,3]  0           [0,3]   0
1.11     peering  [0,2]  0           [0,2]   0
4.b      peering  [0,2]  0           [0,2]   0
1.3c     peering  [0,3]  0           [0,3]   0
0.c      peering  [0,3]  0           [0,3]   0
1.3a     peering  [3,0]  3           [3,0]   3
0.38     peering  [2,0]  2           [2,0]   2
1.39     peering  [0,2]  0           [0,2]   0
4.33     peering  [2,0]  2           [2,0]   2
4.62     peering  [2,0]  2           [2,0]   2
4.3      peering  [0,2]  0           [0,2]   0
0.6      peering  [0,2]  0           [0,2]   0
0.4      peering  [2,0]  2           [2,0]   2
0.3      peering  [2,0]  2           [2,0]   2
1.60     peering  [0,3]  0           [0,3]   0
0.2      peering  [3,0]  3           [3,0]   3
4.6      peering  [3,0]  3           [3,0]   3
1.30     peering  [0,3]  0           [0,3]   0
1.2f     peering  [0,2]  0           [0,2]   0
1.2a     peering  [3,0]  3           [3,0]   3
root@ceph-osd-3:/var/log/ceph#

root@ceph-osd-3:/var/log/ceph# ceph osd tree
ID WEIGHT  TYPE NAME                     UP/DOWN REWEIGHT PRIMARY-AFFINITY
-9 4.00000 root default
-8 4.00000     region eu-west-1
-6 2.00000         datacenter eu-west-1a
-2 2.00000             host ceph-osd-1
 0 1.00000                 osd.0              up  1.00000          1.00000
 1 1.00000                 osd.1              up  1.00000          1.00000
-4 2.00000             host ceph-osd-3
 4 1.00000                 osd.4              up  1.00000          1.00000
 5 1.00000                 osd.5              up  1.00000          1.00000
-7 2.00000         datacenter eu-west-1b
-3 2.00000             host ceph-osd-2
 2 1.00000                 osd.2              up  1.00000          1.00000
 3 1.00000                 osd.3              up  1.00000          1.00000
-5 2.00000             host ceph-osd-4
 6 1.00000                 osd.6              up  1.00000          1.00000
 7 1.00000                 osd.7              up  1.00000          1.00000
root@ceph-osd-3:/var/log/ceph#

Do you guys have any idea why they stay in this state?
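A sketch of the usual first steps for this symptom. The clock-skew warning suggests the monitor clocks drifted while the nodes were down, which can hold up the cluster; the exact service and command names below (`ntp`, `ntpdate`) depend on the distribution, and `pool.ntp.org` is only an example server. `4.2d` is simply one PG id taken from the `dump_stuck` output above:

```shell
# Re-sync the clock on each monitor node (service/command names are
# distribution-dependent assumptions; adjust for your setup):
service ntp stop
ntpdate pool.ntp.org   # example NTP server
service ntp start

# Ask one stuck PG why peering is not progressing; the JSON output
# includes which OSDs it is still waiting to hear from:
ceph pg 4.2d query
```

Peering after a whole-cluster reboot often clears on its own once the OSDs finish re-establishing their sessions, so it is worth re-checking `ceph -s` a few minutes later before intervening.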
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com