Re: after reboot node appears outside the "root" root in the tree

Hi,

This is a common problem when doing a custom CRUSH map: the default behaviour is for each OSD to update its own location in the CRUSH map on start. Did you keep the defaults there?

If that is the problem, you can either:
1) Disable the update-on-start option: "osd crush update on start = false" (see http://docs.ceph.com/docs/master/rados/operations/crush-map/#crush-location), or
2) Customize the script that defines the OSDs' location with "crush location hook = /path/to/customized-ceph-crush-location" (see https://github.com/ceph/ceph/blob/master/src/ceph-crush-location.in for the default script).
Rough sketches of both are below.
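For option 1, assuming the usual /etc/ceph/ceph.conf layout on the OSD nodes, something like this keeps the OSDs from re-placing themselves on start, so the CRUSH map you set by hand stays put:

[osd]
# do not let OSDs update their own CRUSH location when they start
osd crush update on start = false

For option 2, the hook is just an executable that prints the desired CRUSH location on stdout. A minimal sketch using your bucket types (root/rack/node) and host names, which you would point to with "crush location hook = /usr/local/bin/custom-crush-location" (hypothetical path; adjust to taste):

#!/bin/sh
# Hypothetical /usr/local/bin/custom-crush-location
# Ceph invokes the hook with arguments (--cluster/--id/--type); this
# sketch ignores them and decides the location from the local hostname.
case "$(hostname -s)" in
  cpn01) echo "root=root rack=rack1 node=cpn01" ;;
  cpn02) echo "root=root rack=rack2 node=cpn02" ;;
  cpn03) echo "root=root rack=rack1 node=cpn03" ;;
  cpn04) echo "root=root rack=rack2 node=cpn04" ;;
  *)     echo "root=default host=$(hostname -s)" ;;
esac

With the defaults untouched, rebooted OSDs get dropped back under "root=default", which matches what your tree shows for osd.36-47.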

Cheers,
Maxime

On Wed, 13 Sep 2017 at 18:35 German Anders <ganders@xxxxxxxxxxxx> wrote:
# ceph health detail
HEALTH_OK

# ceph osd stat
48 osds: 48 up, 48 in

# ceph pg stat
3200 pgs: 3200 active+clean; 5336 MB data, 79455 MB used, 53572 GB / 53650 GB avail


German

2017-09-13 13:24 GMT-03:00 dE <de.techno@xxxxxxxxx>:
On 09/13/2017 09:08 PM, German Anders wrote:
Hi cephers,

I'm having an issue with a newly created 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc) cluster. Basically, when I reboot one of the nodes and it comes back, its OSDs come up outside of the "root" root in the tree:

root@cpm01:~# ceph osd tree
ID  CLASS WEIGHT   TYPE NAME          STATUS REWEIGHT PRI-AFF
-15       12.00000 root default
 36  nvme  1.00000     osd.36             up  1.00000 1.00000
 37  nvme  1.00000     osd.37             up  1.00000 1.00000
 38  nvme  1.00000     osd.38             up  1.00000 1.00000
 39  nvme  1.00000     osd.39             up  1.00000 1.00000
 40  nvme  1.00000     osd.40             up  1.00000 1.00000
 41  nvme  1.00000     osd.41             up  1.00000 1.00000
 42  nvme  1.00000     osd.42             up  1.00000 1.00000
 43  nvme  1.00000     osd.43             up  1.00000 1.00000
 44  nvme  1.00000     osd.44             up  1.00000 1.00000
 45  nvme  1.00000     osd.45             up  1.00000 1.00000
 46  nvme  1.00000     osd.46             up  1.00000 1.00000
 47  nvme  1.00000     osd.47             up  1.00000 1.00000
 -7       36.00000 root root
 -5       24.00000     rack rack1
 -1       12.00000         node cpn01
  0        1.00000             osd.0      up  1.00000 1.00000
  1        1.00000             osd.1      up  1.00000 1.00000
  2        1.00000             osd.2      up  1.00000 1.00000
  3        1.00000             osd.3      up  1.00000 1.00000
  4        1.00000             osd.4      up  1.00000 1.00000
  5        1.00000             osd.5      up  1.00000 1.00000
  6        1.00000             osd.6      up  1.00000 1.00000
  7        1.00000             osd.7      up  1.00000 1.00000
  8        1.00000             osd.8      up  1.00000 1.00000
  9        1.00000             osd.9      up  1.00000 1.00000
 10        1.00000             osd.10     up  1.00000 1.00000
 11        1.00000             osd.11     up  1.00000 1.00000
 -3       12.00000         node cpn03
 24        1.00000             osd.24     up  1.00000 1.00000
 25        1.00000             osd.25     up  1.00000 1.00000
 26        1.00000             osd.26     up  1.00000 1.00000
 27        1.00000             osd.27     up  1.00000 1.00000
 28        1.00000             osd.28     up  1.00000 1.00000
 29        1.00000             osd.29     up  1.00000 1.00000
 30        1.00000             osd.30     up  1.00000 1.00000
 31        1.00000             osd.31     up  1.00000 1.00000
 32        1.00000             osd.32     up  1.00000 1.00000
 33        1.00000             osd.33     up  1.00000 1.00000
 34        1.00000             osd.34     up  1.00000 1.00000
 35        1.00000             osd.35     up  1.00000 1.00000
 -6       12.00000     rack rack2
 -2       12.00000         node cpn02
 12        1.00000             osd.12     up  1.00000 1.00000
 13        1.00000             osd.13     up  1.00000 1.00000
 14        1.00000             osd.14     up  1.00000 1.00000
 15        1.00000             osd.15     up  1.00000 1.00000
 16        1.00000             osd.16     up  1.00000 1.00000
 17        1.00000             osd.17     up  1.00000 1.00000
 18        1.00000             osd.18     up  1.00000 1.00000
 19        1.00000             osd.19     up  1.00000 1.00000
 20        1.00000             osd.20     up  1.00000 1.00000
 21        1.00000             osd.21     up  1.00000 1.00000
 22        1.00000             osd.22     up  1.00000 1.00000
 23        1.00000             osd.23     up  1.00000 1.00000
 -4              0         node cpn04

Any ideas why this happens, and how can I fix it? It's supposed to be inside rack2.
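(In the meantime, I guess something like the following would move each of osd.36-47 back by hand, though I haven't tried it yet and the weight/location are just taken from the tree above:

ceph osd crush set osd.36 1.0 root=root rack=rack2 node=cpn04

...but I'd rather understand why they keep jumping out of place after a reboot.)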

Thanks in advance,

Best,

German



Can we see the output of ceph health detail? Maybe they're still in the process of recovering.

Also post the output of ceph osd stat so we can see which OSDs are up/in, etc., and ceph pg stat to see the status of the various PGs (a pointer to any recovery in progress).


