On Mon, 30 Sep 2019, Reed Dier wrote:
> I currently have two roots in my crush map, one for HDD devices and one
> for SSD devices, and have had it that way since Jewel.
>
> I am currently on Nautilus, and have had my crush device classes for my
> OSDs set since Luminous.
>
> > ID  CLASS WEIGHT    TYPE NAME
> > -13       105.37599 root ssd
> > -11       105.37599     rack ssd.rack2
> > -14        17.61099         host ceph00
> >  24   ssd   1.76099             osd.24
> >  -1       398.92554 root default
> > -10       397.07343     rack default.rack2
> > -70        44.45032         chassis ceph05
> > -67        44.45032             host ceph05
> >  74   hdd   1.85210                 osd.74
>
> I have crush rulesets that distribute based on the roots for each device
> class.
>
> > [
> >     {
> >         "rule_id": 0,
> >         "rule_name": "replicated_ruleset",
> >         "ruleset": 0,
> >         "type": 1,
> >         "min_size": 1,
> >         "max_size": 10,
> >         "steps": [
> >             {
> >                 "op": "take",
> >                 "item": -1,
> >                 "item_name": "default"
> >             },
> >             {
> >                 "op": "chooseleaf_firstn",
> >                 "num": 0,
> >                 "type": "chassis"
> >             },
> >             {
> >                 "op": "emit"
> >             }
> >         ]
> >     },
> >     {
> >         "rule_id": 1,
> >         "rule_name": "ssd_ruleset",
> >         "ruleset": 1,
> >         "type": 1,
> >         "min_size": 1,
> >         "max_size": 10,
> >         "steps": [
> >             {
> >                 "op": "take",
> >                 "item": -13,
> >                 "item_name": "ssd"
> >             },
> >             {
> >                 "op": "chooseleaf_firstn",
> >                 "num": 0,
> >                 "type": "host"
> >             },
> >             {
> >                 "op": "emit"
> >             }
> >         ]
> >     },
> >     {
> >         "rule_id": 2,
> >         "rule_name": "hybrid_ruleset",
> >         "ruleset": 2,
> >         "type": 1,
> >         "min_size": 1,
> >         "max_size": 10,
> >         "steps": [
> >             {
> >                 "op": "take",
> >                 "item": -13,
> >                 "item_name": "ssd"
> >             },
> >             {
> >                 "op": "chooseleaf_firstn",
> >                 "num": 1,
> >                 "type": "host"
> >             },
> >             {
> >                 "op": "emit"
> >             },
> >             {
> >                 "op": "take",
> >                 "item": -1,
> >                 "item_name": "default"
> >             },
> >             {
> >                 "op": "chooseleaf_firstn",
> >                 "num": -1,
> >                 "type": "chassis"
> >             },
> >             {
> >                 "op": "emit"
> >             }
> >         ]
> >     }
> > ]
>
> If I wanted to migrate to rulesets based on device class with minimal
> disruption, what are my options?
>
> In my mind the way this would work would be to:
> 1. Set the norebalance flag
> 2. Rework my crush rulesets to use takes based on class rather than root.
> 3. Merge my ssd hosts from the ssd root to the default root
> 4. Let things rebalance?
>
> I would prefer minimal data movement, as that would be potentially
> disruptive and would, I imagine, provide minimal gain for me, aside from
> possibly better data distribution.

You can do this with no data movement using the crushtool reclassify
function.  There are a few different ways it can work, so it'll depend
slightly on what your current map looks like.  See

https://github.com/ceph/ceph/blob/master/src/test/cli/crushtool/reclassify.t

for the set of tests.

You'll want to extract your current crush map (ceph osd getcrushmap -o cm),
operate on that with crushtool, and compare the mappings from the original
and your modified version to ensure that no data moves.

Hopefully that test file above will be enough to guide you!  Here are the
docs:

https://docs.ceph.com/docs/master/rados/operations/crush-map-edits/#crush-reclassify

sage
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
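
For reference, a rough sketch of the workflow Sage describes, modeled on the
reclassify example in the linked docs.  The file names and the
--reclassify-bucket match patterns below are illustrative assumptions (this
map names its ssd buckets ssd.rack2 rather than the docs' %-ssd suffix
convention), so they would need adjusting and testing against the real map
before use:

  # Grab the current CRUSH map (binary) and keep it to compare against.
  ceph osd getcrushmap -o original.map

  # Reclassify: tag everything under "default" as class hdd, and fold the
  # parallel "ssd" hierarchy into "default" as class ssd.  The bucket-match
  # patterns are assumptions for this map's naming and may need tweaking.
  crushtool -i original.map --reclassify \
      --set-subtree-class default hdd \
      --reclassify-root default hdd \
      --reclassify-bucket ssd.% ssd default \
      --reclassify-bucket ssd ssd default \
      -o adjusted.map

  # Verify the old and new maps produce the same mappings, i.e. no data
  # movement.
  crushtool -i original.map --compare adjusted.map

  # Only once the comparison is clean, inject the adjusted map.
  ceph osd setcrushmap -i adjusted.map

The --compare step is the safety check: it runs a large sample of inputs
through both maps and reports how many mappings differ, which should be at
or near zero before the adjusted map is injected.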