New CRUSH device class questions

We have a 12.2.8 Luminous cluster that is all NVMe, and we want to dedicate some of the NVMe OSDs strictly to the metadata pools (we have had problems with this cluster filling up and causing lingering metadata issues, and reserving OSDs would guarantee space for metadata operations). In the past we have done this the old-school way with a separate CRUSH root, but I wanted to see if we could leverage the device class feature instead.

Right now all our devices show as ssd rather than nvme, but that is the only class in this cluster. None of the device classes were manually set, so is there a reason they were not detected as nvme?
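
For reference, this is roughly how I was planning to check things and override the class by hand (osd.0 is just a placeholder; my understanding is that an existing class has to be removed before a new one can be set):

    ceph osd crush class ls          # currently only shows "ssd"
    ceph osd tree                    # CLASS column per OSD
    ceph osd crush rm-device-class osd.0
    ceph osd crush set-device-class nvme osd.0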

Is it possible to add a new device class like 'metadata'?
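
To frame the question, this is the sort of thing I was hoping would work (the class name 'metadata', the rule name, and the OSD/pool names are just examples I made up):

    # tag the reserved OSDs with a custom class
    ceph osd crush rm-device-class osd.10 osd.11 osd.12
    ceph osd crush set-device-class metadata osd.10 osd.11 osd.12

    # replicated rule limited to that class: <name> <root> <failure-domain> <class>
    ceph osd crush rule create-replicated meta_rule default host metadata

    # point the metadata pool at the new rule
    ceph osd pool set cephfs_metadata crush_rule meta_rule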

If I set the device class manually, will it be overwritten when the OSD boots up?
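
If it does get reset at boot, I assume the knob to stop that is osd_class_update_on_start (defaults to true, if I'm reading the docs correctly), e.g.:

    # ceph.conf on the OSD hosts
    [osd]
    osd_class_update_on_start = false

    # or at runtime, without restarting the OSDs
    ceph tell 'osd.*' injectargs '--osd_class_update_on_start=false'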

Is what I'm trying to accomplish better done the old-school way with a separate root and an osd_crush_location_hook (potentially using a file listing the partition UUIDs that should be in the metadata pool)?
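
For the old-school route, this is the kind of hook I was picturing (the paths and the list file are hypothetical, and I've simplified it to match on OSD ids rather than partition UUIDs; as I understand it, ceph-osd calls the hook with --cluster/--id/--type and uses whatever location string it prints to stdout):

    #!/bin/sh
    # hypothetical hook: /usr/local/bin/metadata-crush-hook
    # ceph-osd invokes it as: <hook> --cluster <name> --id <osd-id> --type osd
    while [ $# -gt 0 ]; do
        case "$1" in
            --id) shift; id="$1" ;;
        esac
        shift
    done
    # /etc/ceph/metadata-osds.txt is a made-up file listing the OSD ids
    # that should live under the separate metadata root
    if grep -qx "$id" /etc/ceph/metadata-osds.txt; then
        # distinct host bucket name so it doesn't collide with the default root
        echo "root=metadata host=$(hostname -s)-meta"
    else
        echo "root=default host=$(hostname -s)"
    fi

    # ceph.conf on the OSD hosts
    [osd]
    osd_crush_location_hook = /usr/local/bin/metadata-crush-hook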

Any other options I may not be considering?

Thank you,
Robert LeBlanc
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
