I am looking to create a new pool backed by a particular set of drives: larger NVMe SSDs (Intel SSDPF2NV153TZ, 15TB). Specifically, I am wondering about the best way to move these devices out of the pool they currently back and dedicate them to the new pool.

The documentation suggests I would want to assign them to a new device class and give the new pool a placement rule that targets that class. The cluster currently has two device classes, 'hdd' and 'ssd'. The larger 15TB drives were automatically assigned to the 'ssd' class, and that class is already targeted by the placement rule of a different pool.

The documentation describes setting a device class for an OSD with a command like `ceph osd crush set-device-class CLASS OSD_ID [OSD_ID ...]`, where class names can be arbitrary strings like 'big_nvme'. Before setting a new device class on an OSD that already has one, I should first run `ceph osd crush rm-device-class osd.XX`.

So my questions are: Can I proceed to directly remove these OSDs from the current device class and assign them to the new one? Should they be moved one by one? And what is the right way to protect the data in the existing pool that they currently map to?

To make this concrete, below is the rough sequence I have pieced together from the docs; I would welcome corrections.
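First, reclassifying a single OSD. This is just my reading of the docs, not a tested procedure; the class name 'big_nvme' and osd.10 are placeholders:

  # placeholder OSD id; repeat for each 15TB drive
  # (rm-device-class takes only OSD names, no class argument)
  ceph osd crush rm-device-class osd.10
  # 'big_nvme' is an arbitrary new class name
  ceph osd crush set-device-class big_nvme osd.10
  # verify which OSDs now carry the new class
  ceph osd crush class ls-osd big_nvme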
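Then, once all of the 15TB drives carry the new class, I assume the rule and pool for it would be created roughly like this (the rule and pool names, PG counts, and the 'host' failure domain are guesses for my setup):

  # replicated rule restricted to the new device class;
  # 'default' root and 'host' failure domain are assumptions about my CRUSH map
  ceph osd crush rule create-replicated big_nvme_rule default host big_nvme
  # new pool pinned to that rule; name and PG counts are placeholders
  ceph osd pool create big_nvme_pool 128 128 replicated big_nvme_rule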
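On the safety side, my tentative plan is to confirm cluster health and how much data sits on these OSDs first, and to batch all the class changes behind the norebalance flag so the resulting data movement happens once rather than per OSD. Is that sufficient, or is there more to it?

  # confirm the cluster is healthy and see what these OSDs currently hold
  ceph -s
  ceph osd df tree
  # hold off rebalancing while the class changes are applied
  ceph osd set norebalance
  # ... reclassify each OSD as above ...
  ceph osd unset norebalance
  # watch the remapped PGs backfill until the cluster is healthy again
  ceph -w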
Thanks,
 Matt

-- 
Matt Larson, PhD
Madison, WI 53705
U.S.A.