Re: phantom osd.0 in osd tree

As someone else mentioned, ‘ceph osd rm 0’ took it out of the osd tree.
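
For the archive, the fuller clean-up that I believe applies when a stale entry still has CRUSH and auth entries is roughly:

ceph osd crush remove osd.0   # drop it from the CRUSH map (a no-op here, since osd.0 was never in CRUSH)
ceph auth del osd.0           # remove any leftover cephx key for the OSD
ceph osd rm 0                 # remove it from the OSD map; this is what cleared the phantom tree entry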

Crush map attached. It's odd seeing deviceN entries in the devices block where there are holes in my cluster's osd numbering; I assume those are just placeholders until the IDs are backfilled by new OSDs when the cluster expands.

Thanks,

Reed

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable straw_calc_version 1

# devices
device 0 device0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7
device 8 device8
device 9 osd.9

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host node24 {
	id -2		# do not change unnecessarily
	# weight 7.275
	alg straw
	hash 0	# rjenkins1
	item osd.1 weight 7.275
}
host node25 {
	id -3		# do not change unnecessarily
	# weight 7.275
	alg straw
	hash 0	# rjenkins1
	item osd.2 weight 7.275
}
host node26 {
	id -4		# do not change unnecessarily
	# weight 7.275
	alg straw
	hash 0	# rjenkins1
	item osd.3 weight 7.275
}
host node27 {
	id -5		# do not change unnecessarily
	# weight 7.275
	alg straw
	hash 0	# rjenkins1
	item osd.4 weight 7.275
}
host node28 {
	id -6		# do not change unnecessarily
	# weight 7.275
	alg straw
	hash 0	# rjenkins1
	item osd.5 weight 7.275
}
host node29 {
	id -7		# do not change unnecessarily
	# weight 7.275
	alg straw
	hash 0	# rjenkins1
	item osd.6 weight 7.275
}
host node30 {
	id -8		# do not change unnecessarily
	# weight 7.275
	alg straw
	hash 0	# rjenkins1
	item osd.9 weight 7.275
}
host node31 {
	id -9		# do not change unnecessarily
	# weight 7.275
	alg straw
	hash 0	# rjenkins1
	item osd.7 weight 7.275
}
root default {
	id -1		# do not change unnecessarily
	# weight 58.200
	alg straw
	hash 0	# rjenkins1
	item node24 weight 7.275
	item node25 weight 7.275
	item node26 weight 7.275
	item node27 weight 7.275
	item node28 weight 7.275
	item node29 weight 7.275
	item node30 weight 7.275
	item node31 weight 7.275
}

# rules
rule replicated_ruleset {
	ruleset 0
	type replicated
	min_size 1
	max_size 10
	step take default
	step chooseleaf firstn 0 type host
	step emit
}

# end crush map
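
For reference, the dump above was pulled with the usual tooling (the file names below are just placeholders):

ceph osd getcrushmap -o crushmap.bin        # fetch the compiled CRUSH map from the monitors
crushtool -d crushmap.bin -o crushmap.txt   # decompile it into the text form shown above
crushtool -c crushmap.txt -o crushmap.new   # (only if editing) recompile the edited text map
ceph osd setcrushmap -i crushmap.new        # (only if editing) inject it back into the cluster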


On Aug 24, 2016, at 12:56 AM, M Ranga Swami Reddy <swamireddy@xxxxxxxxx> wrote:

Please share the crushmap.

Thanks
Swami

On Tue, Aug 23, 2016 at 11:49 PM, Reed Dier <reed.dier@xxxxxxxxxxx> wrote:
I'm trying to hunt down a mystery osd that has shown up in the osd tree.

The cluster was deployed with ceph-deploy from an admin node; it was originally 10.2.1 at deployment time and has since been upgraded to 10.2.2.

For reference, the mons and mds do not live on the osd nodes, and the admin node is neither mon, mds, nor osd.

When I attempt to remove it from the crush map, it says that osd.0 does not exist.
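
For reference, the attempted removal was along these lines (the standard CRUSH removal command):

ceph osd crush remove osd.0   # this is what reports that osd.0 does not exist in the crush map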

Just looking for some insight into this mystery.

Thanks

# ceph osd tree
ID WEIGHT   TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 58.19960 root default
-2  7.27489     host node24
 1  7.27489         osd.1        up  1.00000          1.00000
-3  7.27489     host node25
 2  7.27489         osd.2        up  1.00000          1.00000
-4  7.27489     host node26
 3  7.27489         osd.3        up  1.00000          1.00000
-5  7.27489     host node27
 4  7.27489         osd.4        up  1.00000          1.00000
-6  7.27489     host node28
 5  7.27489         osd.5        up  1.00000          1.00000
-7  7.27489     host node29
 6  7.27489         osd.6        up  1.00000          1.00000
-8  7.27539     host node30
 9  7.27539         osd.9        up  1.00000          1.00000
-9  7.27489     host node31
 7  7.27489         osd.7        up  1.00000          1.00000
 0        0 osd.0              down        0          1.00000