# ceph health detail
HEALTH_OK
# ceph osd stat
48 osds: 48 up, 48 in
# ceph pg stat
3200 pgs: 3200 active+clean; 5336 MB data, 79455 MB used, 53572 GB / 53650 GB avail
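Given the output above -- HEALTH_OK, all 48 OSDs up and in, every PG active+clean -- this doesn't look like recovery; the OSDs were simply re-placed under root default when the node came back. Assuming osd.36-47 really belong under node cpn04 in rack2 (the numbering in the tree below suggests so), a minimal sketch for moving them back by hand, using the bucket names from that tree:

for i in $(seq 36 47); do
    ceph osd crush set osd.$i 1.0 root=root rack=rack2 node=cpn04
done

ceph osd crush set takes the OSD, its CRUSH weight (1.0 here, matching the tree), and the target location as type=name pairs ('node' being this cluster's host-level bucket type). This only fixes the map once; see the ceph.conf note below the quoted tree for making the placement stick across reboots.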
German
2017-09-13 13:24 GMT-03:00 dE <de.techno@xxxxxxxxx>:
On 09/13/2017 09:08 PM, German Anders wrote:
Hi cephers,
I'm having an issue with a newly created cluster, 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc). Basically, when I reboot one of the nodes and it comes back, its OSDs come up outside of their root bucket in the tree:
root@cpm01:~# ceph osd tree
ID  CLASS WEIGHT   TYPE NAME       STATUS REWEIGHT PRI-AFF
-15       12.00000 root default
 36  nvme  1.00000     osd.36          up  1.00000 1.00000
 37  nvme  1.00000     osd.37          up  1.00000 1.00000
 38  nvme  1.00000     osd.38          up  1.00000 1.00000
 39  nvme  1.00000     osd.39          up  1.00000 1.00000
 40  nvme  1.00000     osd.40          up  1.00000 1.00000
 41  nvme  1.00000     osd.41          up  1.00000 1.00000
 42  nvme  1.00000     osd.42          up  1.00000 1.00000
 43  nvme  1.00000     osd.43          up  1.00000 1.00000
 44  nvme  1.00000     osd.44          up  1.00000 1.00000
 45  nvme  1.00000     osd.45          up  1.00000 1.00000
 46  nvme  1.00000     osd.46          up  1.00000 1.00000
 47  nvme  1.00000     osd.47          up  1.00000 1.00000
 -7       36.00000 root root
 -5       24.00000     rack rack1
 -1       12.00000         node cpn01
  0        1.00000             osd.0   up  1.00000 1.00000
  1        1.00000             osd.1   up  1.00000 1.00000
  2        1.00000             osd.2   up  1.00000 1.00000
  3        1.00000             osd.3   up  1.00000 1.00000
  4        1.00000             osd.4   up  1.00000 1.00000
  5        1.00000             osd.5   up  1.00000 1.00000
  6        1.00000             osd.6   up  1.00000 1.00000
  7        1.00000             osd.7   up  1.00000 1.00000
  8        1.00000             osd.8   up  1.00000 1.00000
  9        1.00000             osd.9   up  1.00000 1.00000
 10        1.00000             osd.10  up  1.00000 1.00000
 11        1.00000             osd.11  up  1.00000 1.00000
 -3       12.00000         node cpn03
 24        1.00000             osd.24  up  1.00000 1.00000
 25        1.00000             osd.25  up  1.00000 1.00000
 26        1.00000             osd.26  up  1.00000 1.00000
 27        1.00000             osd.27  up  1.00000 1.00000
 28        1.00000             osd.28  up  1.00000 1.00000
 29        1.00000             osd.29  up  1.00000 1.00000
 30        1.00000             osd.30  up  1.00000 1.00000
 31        1.00000             osd.31  up  1.00000 1.00000
 32        1.00000             osd.32  up  1.00000 1.00000
 33        1.00000             osd.33  up  1.00000 1.00000
 34        1.00000             osd.34  up  1.00000 1.00000
 35        1.00000             osd.35  up  1.00000 1.00000
 -6       12.00000     rack rack2
 -2       12.00000         node cpn02
 12        1.00000             osd.12  up  1.00000 1.00000
 13        1.00000             osd.13  up  1.00000 1.00000
 14        1.00000             osd.14  up  1.00000 1.00000
 15        1.00000             osd.15  up  1.00000 1.00000
 16        1.00000             osd.16  up  1.00000 1.00000
 17        1.00000             osd.17  up  1.00000 1.00000
 18        1.00000             osd.18  up  1.00000 1.00000
 19        1.00000             osd.19  up  1.00000 1.00000
 20        1.00000             osd.20  up  1.00000 1.00000
 21        1.00000             osd.21  up  1.00000 1.00000
 22        1.00000             osd.22  up  1.00000 1.00000
 23        1.00000             osd.23  up  1.00000 1.00000
 -4              0         node cpn04
Any ideas why this happens, and how can I fix it? The node is supposed to be inside rack2.
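The usual cause here is that ceph-osd updates its own CRUSH location on every start: 'osd crush update on start' defaults to true, and with no explicit location configured the OSD places itself back under root=default. A minimal ceph.conf sketch for the OSD nodes (cpn04 shown as the example; adjust per node, and the custom 'node' bucket type is taken from the tree above) pins the location:

[osd]
crush location = root=root rack=rack2 node=cpn04

Alternatively, stop the OSDs from repositioning themselves at all and manage the map entirely by hand:

[osd]
osd crush update on start = false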
Thanks in advance,
Best,
German
Can we see the output of ceph health detail? Maybe they're in the process of recovery.
Also post the output of ceph osd stat so we can see which OSDs are up/in, etc., and ceph pg stat to see the status of the various PGs (a pointer to the recovery process).
_______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com