Hi,

I would like to use some of the block.db SSD space for additional OSDs. We run several radosgw clusters with 8 TB and 16 TB rotational OSDs. We added 2 TB SSDs and use one SSD per five 8 TB OSDs or per three 16 TB OSDs. There is still space left on those devices, and my idea is to create another 100 GB LV on every SSD and add it to the cluster as an OSD for the metadata pools (like .index, .log, .gc).

I've read that it is bad to have multiple OSDs on one disk, but I am not sure whether that also applies to disks that already serve as block.db devices. These are "normal" enterprise SSDs, not NVMe drives.

I hope this solves the "restarting OSDs" problem (https://tracker.ceph.com/issues/54434#note-4) that we have had since the Octopus upgrade. All other options (recreating OSDs, removing the SSDs, adding SSDs for all OSDs, more disks, fewer disks, more RAM, ...) have already been considered.
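In case it helps the discussion, this is roughly what I have in mind per SSD. It is only a sketch: the VG/LV names and the OSD id are placeholders for our setup, and I am assuming the free space sits in the same VG that holds the block.db LVs.

    # carve a 100 GB LV out of the remaining free space on the
    # block.db VG (VG name "ceph-db-0" is a placeholder)
    lvcreate -n osd-meta-0 -L 100G ceph-db-0

    # create a standalone bluestore OSD on that LV
    ceph-volume lvm create --bluestore --data ceph-db-0/osd-meta-0

    # give the new OSD its own device class so the metadata pools
    # can later be pinned to these SSD OSDs only (osd.123 is a placeholder)
    ceph osd crush rm-device-class osd.123
    ceph osd crush set-device-class ssd-meta osd.123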
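And then, assuming the custom "ssd-meta" device class from above, something like this should move the rgw metadata pools onto those OSDs. The pool names below are just examples; they would have to be the actual metadata pool names of the zone in question.

    # replicated CRUSH rule that only targets the new device class
    ceph osd crush rule create-replicated rgw-meta default host ssd-meta

    # point the rgw metadata pools at that rule
    ceph osd pool set default.rgw.buckets.index crush_rule rgw-meta
    ceph osd pool set default.rgw.log crush_rule rgw-meta

Does anyone see a problem with this, given that the SSDs also carry the block.db LVs for the HDD OSDs?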