We have a new cluster being deployed using cephadm. Each storage node has 24x 18TB HDDs and 4x 2.9TB NVMe drives, and we want to use the flash drives both for RocksDB/WAL for the 24 spinners and as flash OSDs.

From first inspection it seems that cephadm only supports using a device for a single purpose: we could split the 4x NVMes and have two be used entirely for DB/WAL while the other two become OSDs. We think it would be a better configuration to spread the DB/WAL partitions over all four drives, so that it would take a failure of all four, rather than two, to bring down a storage node.

Can cephadm use partitions instead of whole disks to accomplish this, or is this unsupported?

Thanks in advance,
Sean
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
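
For context, the whole-device fallback described above (two NVMes dedicated to DB/WAL, two as standalone OSDs) might be expressed with cephadm OSD service specs roughly like this. This is a sketch, not a tested config: the service_id names are illustrative, and the use of the `limit` filter to cap how many NVMes the first spec claims is an assumption about drive-group filtering behavior:

```yaml
# Sketch: HDD OSDs with DB/WAL offloaded to two of the NVMe devices.
service_type: osd
service_id: hdd-with-nvme-db   # illustrative name
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1              # the 24x 18TB spinners
  db_devices:
    rotational: 0              # NVMe drives
    limit: 2                   # assumed: claim only two of the four
---
# Sketch: standalone all-flash OSDs on the remaining NVMes.
service_type: osd
service_id: nvme-osd           # illustrative name
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 0
```

The open question in the post is whether the `db_devices` section can instead point at partitions on all four NVMes, rather than consuming whole devices as above.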