Are all the OSDs under the same CRUSH root? I would think that since the
CRUSH weight of a host changes as soon as its OSDs are out, it impacts the
whole CRUSH tree. If you separate the SSDs from the HDDs logically
(e.g. a separate bucket/root in the CRUSH tree, or class-specific rules),
the remapping wouldn't affect the HDDs.
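If the pools aren't split by device class yet, a class-specific rule keeps
such remapping contained to one class. A rough sketch (the rule name and
pool placeholder below are only examples, not taken from your cluster):

ceph osd crush tree --show-shadow    # shows the per-class shadow roots (default~hdd, default~ssd)
ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd pool set <pool> crush_rule replicated_hdd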
Quoting Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx>:
I have been converting SSD OSDs to dmcrypt, and I have noticed that PGs
are migrated for pools that should be (and are?) on the hdd device class.
On an otherwise healthy (HEALTH_OK) cluster, when I set the crush reweight
of an SSD OSD to 0.0, I get this:
17.35 10415 0 0 9907 0 36001743890 0 0 3045 3045 active+remapped+backfilling 2020-09-27 12:55:49.093054 83758'20725398 83758:100379720 [8,14,23] 8 [3,14,23] 3 83636'20718129 2020-09-27 00:58:07.098096 83300'20689151 2020-09-24 21:42:07.385360 0
However, OSDs 3, 14, 23 and 8 are all hdd-class OSDs.
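To see exactly which PGs are moving and what class of OSDs they sit on,
something along these lines should work (untested sketch):

ceph pg ls remapped
ceph osd crush get-device-class osd.3 osd.14 osd.23 osd.8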
Since this cluster dates back to Kraken/Luminous, I am not sure whether the
hdd device class of replicated_ruleset [1] was already set when pool 17 was
created.
The weird thing is that all PGs of this pool seem to be on hdd OSDs [2].
Q: How can I display the definition of 'crush_rule 0' as it was at the time
of pool creation? (To be sure it already had the hdd device class
configured.)
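I guess one approach, if the monitors still keep an osdmap epoch from around
that time, would be to pull an old osdmap and decompile its CRUSH map
(epoch and file names below are just placeholders):

ceph osd getmap <epoch> -o /tmp/osdmap.old
osdmaptool /tmp/osdmap.old --export-crush /tmp/crush.old
crushtool -d /tmp/crush.old -o /tmp/crush.old.txt

Old epochs get trimmed after a while, though, so this probably only reaches
back a limited number of epochs.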
[1]
[@~]# ceph osd pool ls detail | grep 'pool 17'
pool 17 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash
rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 83712
flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
[@~]# ceph osd crush rule dump replicated_ruleset
{
    "rule_id": 0,
    "rule_name": "replicated_ruleset",
    "ruleset": 0,
    "type": 1,
    "min_size": 1,
    "max_size": 10,
    "steps": [
        {
            "op": "take",
            "item": -10,
            "item_name": "default~hdd"
        },
        {
            "op": "chooseleaf_firstn",
            "num": 0,
            "type": "host"
        },
        {
            "op": "emit"
        }
    ]
}
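To confirm what the current rule 0 resolves to, a crushtool test run along
these lines should work (file names are arbitrary, and this only reflects
the current CRUSH map, not the one from pool creation time):

ceph osd getcrushmap -o /tmp/crushmap
crushtool -i /tmp/crushmap --test --rule 0 --num-rep 3 --show-mappings | head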
[2]
[@~]# for osd in `ceph pg dump pgs | grep '^17' | awk '{print $17" "$19}' | grep -oE '[0-9]{1,2}' | sort -u -n`; do ceph osd crush get-device-class osd.$osd ; done | sort -u
dumped pgs
hdd
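A shorter cross-check, assuming the pool is still named 'rbd', would be to
list the pool's PGs directly and look at the UP/ACTING sets, plus the
per-OSD overview that includes each OSD's device class:

ceph pg ls-by-pool rbd
ceph osd df tree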
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx