Hello everybody,

I want to split my OSDs' BlueStore DB (block.db) volumes across two NVMes (250G each) and one SSD (900G). I used the following configuration:

```
service_type: osd
service_id: osd_spec_a
placement:
  host_pattern: "*"
spec:
  data_devices:
    paths:
      - /dev/sdc
      - /dev/sdd
      - /dev/sde
      - /dev/sdf
  db_devices:
    model: ssd_model
---
service_type: osd
service_id: osd_spec_b
placement:
  host_pattern: "*"
spec:
  data_devices:
    paths:
      - /dev/sdg
      - /dev/sdh
      - /dev/sdi
      - /dev/sdj
  db_devices:
    model: nvme_model
```

The SSD is partitioned as intended, with four partitions of 230GB each, but cephadm gives each NVMe only two partitions of 60GB each, so I am losing almost 120G on each NVMe device.

1- My desired size would be 110GB for each partition. Would adding `block_db_size: 120G` to the spec achieve this? (A sketch of what I mean is in the P.S. below.)

2- How can I expand the size of the existing BlueStore LVM volumes? I tried `lvextend -L 40G /path/to/dev` followed by `ceph-bluestore-tool bluefs-bdev-expand --path /path/to/dev`, but the second command failed (the full sequence as I understand it is in the P.P.S. below):

```
ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)
 1: /lib64/libpthread.so.0(+0x12ce0) [0x7fa27241fce0]
 2: gsignal()
 3: abort()
 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a9) [0x7fa272fdfba3]
 5: /usr/lib64/ceph/libceph-common.so.2(+0x276d6c) [0x7fa272fdfd6c]
 6: (BlueStore::expand_devices(std::ostream&)+0x5fd) [0x55eeb41a062d]
 7: main()
 8: __libc_start_main()
 9: _start()
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Aborted (core dumped)
```

Thanks for any help,
Ali
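
P.S. Regarding question 1, this is how I would add the size option to the second spec. It is an untested sketch; whether `block_db_size` is honored in this position, and whether the value needs to be given in bytes rather than with a `G` suffix, is exactly what I am unsure about:

```
service_type: osd
service_id: osd_spec_b
placement:
  host_pattern: "*"
spec:
  data_devices:
    paths:
      - /dev/sdg
      - /dev/sdh
      - /dev/sdi
      - /dev/sdj
  db_devices:
    model: nvme_model
  block_db_size: 110G   # target size per DB volume (or should this be 120G?)
```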
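
P.P.S. For question 2, here is the complete sequence as I understood it from the docs, spelled out. The OSD id (osd.7) and the VG/LV names are placeholders for my actual ones, and I may be wrong about the `cephadm shell` invocation. Note that `-L +40G` grows the LV by 40G, whereas the `-L 40G` I used sets an absolute size:

```
# stop the OSD so BlueFS is not in use while resizing
ceph orch daemon stop osd.7

# grow the DB logical volume (placeholder VG/LV names)
lvextend -L +40G /dev/ceph-db-vg/osd-db-lv

# expand BlueFS onto the enlarged LV, run inside the OSD's container;
# --path takes the OSD data directory, not the raw device
cephadm shell --name osd.7 -- ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-7

# bring the OSD back up
ceph orch daemon start osd.7
```

Is this the right procedure, or am I misunderstanding what `--path` should point at?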