Well, I would first check the crush rules to see whether a device class is
defined there. If it is, then you have to create a new crush rule (for the
new class) and set it on the affected pools.
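A minimal sketch of that check-and-switch, assuming a replicated pool; the rule name "big_nvme_rule", the pool name "mypool", the root "default", and the failure domain "host" are placeholders you would adapt to your cluster:

```shell
# List the existing crush rules, then dump one to see
# whether it pins a device class (look for "~ssd" etc. in item_name)
ceph osd crush rule ls
ceph osd crush rule dump replicated_rule

# Create a replicated rule restricted to the new device class
# (args: rule name, crush root, failure domain, device class)
ceph osd crush rule create-replicated big_nvme_rule default host big_nvme

# Point the affected pool at the new rule
ceph osd pool set mypool crush_rule big_nvme_rule
```

Switching a pool's crush_rule will itself trigger data movement if the new rule maps to a different set of OSDs, so plan it together with the re-labelling.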
On 10/26/23 23:36, Matt Larson wrote:
It is good to know that moving the devices over to a new class is a safe operation.
On Tue, Oct 24, 2023 at 2:16 PM Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
The documentation describes that I could set a device class for an OSD with
a command like:
`ceph osd crush set-device-class CLASS OSD_ID [OSD_ID ..]`
Class names can be arbitrary strings like 'big_nvme'. Before setting a
device class on an OSD that already has an assigned device class, I should
use `ceph osd crush rm-device-class ssd osd.XX`.
Yes, you can re-"name" them by removing old class and setting a new one.
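A sketch of that two-step rename for a single OSD (the id `osd.12` and the class name `big_nvme` are made-up examples):

```shell
# An OSD carries at most one device class, so clear the old one first;
# rm-device-class takes one or more OSD ids
ceph osd crush rm-device-class osd.12

# Then assign the new class (the class name is an arbitrary string)
ceph osd crush set-device-class big_nvme osd.12

# Verify: the CLASS column of the tree should now show the new class
ceph osd tree
```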
Can I proceed to directly remove these OSDs from the current device class
and assign them to a new device class? Should they be moved one by one? What
is the way to safely protect data from the existing pool that they are
mapped to?
Yes, the PGs on them will be misplaced, so if their pool aims to only use "ssd" OSDs
and you re-label them to big-nvme instead, the PGs will look for other
OSDs to land on, and move themselves if possible. It is a fairly safe operation:
they continue to work, but will try to evacuate the PGs, which should no longer be on them.
Worst case, your planning is wrong and the "ssd" OSDs can't accept them, in which case you
can just undo the relabel and the PGs come back.
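Putting the above together, a cautious one-OSD-at-a-time workflow might look like this (again, `osd.12` and `big_nvme` are example names; the undo step restores the original `ssd` class):

```shell
# Re-label one OSD, then let recovery settle before doing the next
ceph osd crush rm-device-class osd.12
ceph osd crush set-device-class big_nvme osd.12

# Watch the misplaced PGs drain; recovery progress shows up here
ceph -s
ceph pg stat

# Undo, if it turns out the remaining "ssd" OSDs can't take the data
ceph osd crush rm-device-class osd.12
ceph osd crush set-device-class ssd osd.12
```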
May the most significant bit of your life be positive.
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx