You don't need to create OSDs manually to get what you want. Cephadm has two
options in the OSD service specification to control this:

OSD Service — Ceph Documentation
<https://docs.ceph.com/en/latest/cephadm/services/osd/#ceph.deployment.drive_group.DriveGroupSpec>

block_db_size: Union[int, str, None]
<https://docs.ceph.com/en/latest/cephadm/services/osd/#ceph.deployment.drive_group.DriveGroupSpec.block_db_size>
  Set (or override) the "bluestore_block_db_size" value, in bytes.

db_slots
<https://docs.ceph.com/en/latest/cephadm/services/osd/#ceph.deployment.drive_group.DriveGroupSpec.db_slots>
  How many OSDs per DB device.

You can use these two options together to control how much of the SSD gets
allocated for DB/WAL; a sketch of such a spec follows below the quoted message.

Regards,
Anh Phan

On Tue, Aug 22, 2023 at 11:56 PM Alfredo Rezinovsky <alfrenovsky@xxxxxxxxx> wrote:

> SSD drives perform awfully when full.
> Even if I set the DB to SSD for 4 OSDs and there are only 2, the dashboard
> daemon allocates the whole SSD.
>
> I want to partition only 70% of the SSD for DB/WAL and leave the rest free
> for SSD maneuvering.
>
> Is there a way to create an OSD while manually specifying which disks or
> partitions to use for data and DB (the way I used to do it with ceph-deploy)?
>
>
> --
> Alfrenovsky
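
For reference, here is a minimal OSD service spec sketch along these lines.
It is not taken from the thread above: the service_id, host pattern, device
filters and sizes are placeholder assumptions. Pick block_db_size and db_slots
so that their product is roughly 70% of the SSD's capacity, leaving the rest
of the device unpartitioned.

# osd_spec.yaml -- hypothetical example; filters and sizes are assumptions
service_type: osd
service_id: osd_hdd_with_ssd_db      # arbitrary name for this example
placement:
  host_pattern: '*'                  # apply to all hosts; narrow as needed
spec:
  data_devices:
    rotational: 1                    # HDDs carry the OSD data
  db_devices:
    rotational: 0                    # SSDs carry the DB/WAL
  block_db_size: '120G'              # size of each DB partition (example value)
  db_slots: 4                        # cut at most 4 DB partitions per SSD

Applying it first with "ceph orch apply -i osd_spec.yaml --dry-run" shows which
devices and DB sizes cephadm would use without creating anything, so you can
confirm the SSD would not be fully consumed before dropping --dry-run.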