Hi,

we want to provision OSDs on nodes with 36 18TB HDDs; their RocksDBs should be stored on 960GB SSDs (6 DB slots per SSD). This is Ceph version 16.2.7 from Red Hat Ceph Storage 5.1.

When using this YAML service specification:

service_type: osd
service_id: HDD-OSDs
placement:
  label: 'hddosd'
data_devices:
  rotational: 1
  size: '16.37TB'
db_devices:
  rotational: 0
  size: '894.25GB'
db_slots: 6

the OSDs get created, but their DB volumes are only 24.84 GB in size, which is exactly 894.25 GB divided by 36.

We tried to add "block_db_size: '128GB'" to the YAML, but this makes ceph-volume fail because the requested size is supposedly larger than the maximum available:

128.00 GB was requested for block_db_size, but only 24.84 GB can be fulfilled

It is also interesting that "db_slots: 6" does not get injected into the ceph-volume command line:

Non-zero exit code 1 from /bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:06255c43a5ccaec516969637a39d500a0354da26127779b5ee53dbe9c444339c -e NODE_NAME=urz-ceph-01.rz.uni-jena.de -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=HDD-OSDs -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/06cf7622-0d7c-11ed-936d-88e9a437ebd0:/var/run/ceph:z -v /var/log/ceph/06cf7622-0d7c-11ed-936d-88e9a437ebd0:/var/log/ceph:z -v /var/lib/ceph/06cf7622-0d7c-11ed-936d-88e9a437ebd0/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/06cf7622-0d7c-11ed-936d-88e9a437ebd0/selinux:/sys/fs/selinux:ro -v /:/rootfs -v /tmp/ceph-tmpn38s1gi1:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpag6pqr89:/var/lib/ceph/bootstrap-osd/ceph.keyring:z registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:06255c43a5ccaec516969637a39d500a0354da26127779b5ee53dbe9c444339c lvm batch --no-auto /dev/sdaa /dev/sdab /dev/sdac /dev/sdan /dev/sdao /dev/sdap /dev/sdaq /dev/sdar /dev/sdas /dev/sdau /dev/sdav /dev/sdax /dev/sday /dev/sdaz /dev/sdba /dev/sdbb /dev/sdbc /dev/sdbd /dev/sdbe /dev/sdbf /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/sdaf /dev/sdag /dev/sdah /dev/sdc /dev/sdd /dev/sde --block-db-size 128GB --yes --no-systemd
/usr/bin/podman: stderr --> passed data devices: 36 physical, 0 LVM
/usr/bin/podman: stderr --> relative data size: 1.0
/usr/bin/podman: stderr --> passed block_db devices: 6 physical, 0 LVM
/usr/bin/podman: stderr --> 128.00 GB was requested for block_db_size, but only 24.84 GB can be fulfilled

Is this Red Hat specific? Is this a bug in ceph-volume? It looks like it computes the maximum DB size by dividing the capacity of one SSD by the total number of HDDs (960 / 36) instead of by the number of db_slots (960 / 6).
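For what it is worth, the numbers match that suspicion exactly. This is just my own back-of-the-envelope arithmetic on the usable SSD size that ceph-volume reports (894.25 GB), not anything taken from the ceph-volume code:

# python3 -c 'print(round(894.25 / 36, 2), round(894.25 / 6, 2))'
24.84 149.04

24.84 GB per DB volume is what we actually get with 36 data devices; 149.04 GB is what a 6-slot split of one SSD should give, and it is exactly what the 6-HDD test further down produces.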
I get the same errors when running ceph-volume manually:

# cephadm ceph-volume lvm batch --no-auto /dev/sdaa /dev/sdab /dev/sdac /dev/sdan /dev/sdao /dev/sdap /dev/sdaq /dev/sdar /dev/sdas /dev/sdau /dev/sdav /dev/sdax /dev/sday /dev/sdaz /dev/sdba /dev/sdbb /dev/sdbc /dev/sdbd /dev/sdbe /dev/sdbf /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/sdaf /dev/sdag /dev/sdah /dev/sdc /dev/sdd /dev/sde --block-db-slots 6 --yes --no-systemd --report
Inferring fsid 06cf7622-0d7c-11ed-936d-88e9a437ebd0
Using recent ceph image registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:fc25524ccb0ea78526257778ab54bfb1a25772b75fcc97df98eb06a0e67e1bf6

Total OSDs: 36

  Type        Path          LV Size      % of device
----------------------------------------------------------
  data        /dev/sdaa     16.37 TB     100.00%
  block_db    /dev/sde      24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdab     16.37 TB     100.00%
  block_db    /dev/sde      24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdac     16.37 TB     100.00%
  block_db    /dev/sde      24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdan     16.37 TB     100.00%
  block_db    /dev/sde      24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdao     16.37 TB     100.00%
  block_db    /dev/sde      24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdap     16.37 TB     100.00%
  block_db    /dev/sde      24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdaq     16.37 TB     100.00%
  block_db    /dev/sdd      24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdar     16.37 TB     100.00%
  block_db    /dev/sdd      24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdas     16.37 TB     100.00%
  block_db    /dev/sdd      24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdau     16.37 TB     100.00%
  block_db    /dev/sdd      24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdav     16.37 TB     100.00%
  block_db    /dev/sdd      24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdax     16.37 TB     100.00%
  block_db    /dev/sdd      24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sday     16.37 TB     100.00%
  block_db    /dev/sdc      24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdaz     16.37 TB     100.00%
  block_db    /dev/sdc      24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdba     16.37 TB     100.00%
  block_db    /dev/sdc      24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdbb     16.37 TB     100.00%
  block_db    /dev/sdc      24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdbc     16.37 TB     100.00%
  block_db    /dev/sdc      24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdbd     16.37 TB     100.00%
  block_db    /dev/sdc      24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdbe     16.37 TB     100.00%
  block_db    /dev/sdah     24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdbf     16.37 TB     100.00%
  block_db    /dev/sdah     24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdi      16.37 TB     100.00%
  block_db    /dev/sdah     24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdj      16.37 TB     100.00%
  block_db    /dev/sdah     24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdk      16.37 TB     100.00%
  block_db    /dev/sdah     24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdl      16.37 TB     100.00%
  block_db    /dev/sdah     24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdm      16.37 TB     100.00%
  block_db    /dev/sdag     24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdn      16.37 TB     100.00%
  block_db    /dev/sdag     24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdo      16.37 TB     100.00%
  block_db    /dev/sdag     24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdp      16.37 TB     100.00%
  block_db    /dev/sdag     24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdq      16.37 TB     100.00%
  block_db    /dev/sdag     24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdr      16.37 TB     100.00%
  block_db    /dev/sdag     24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sds      16.37 TB     100.00%
  block_db    /dev/sdaf     24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdt      16.37 TB     100.00%
  block_db    /dev/sdaf     24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdv      16.37 TB     100.00%
  block_db    /dev/sdaf     24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdw      16.37 TB     100.00%
  block_db    /dev/sdaf     24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdx      16.37 TB     100.00%
  block_db    /dev/sdaf     24.84 GB     2.78%
----------------------------------------------------------
  data        /dev/sdy      16.37 TB     100.00%
  block_db    /dev/sdaf     24.84 GB     2.78%

# cephadm ceph-volume lvm batch --no-auto /dev/sdaa /dev/sdab /dev/sdac /dev/sdan /dev/sdao /dev/sdap /dev/sdaq /dev/sdar /dev/sdas /dev/sdau /dev/sdav /dev/sdax /dev/sday /dev/sdaz /dev/sdba /dev/sdbb /dev/sdbc /dev/sdbd /dev/sdbe /dev/sdbf /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/sdaf /dev/sdag /dev/sdah /dev/sdc /dev/sdd /dev/sde --block-db-slots 6 --block-db-size 128GB --yes --no-systemd --report
Inferring fsid 06cf7622-0d7c-11ed-936d-88e9a437ebd0
Using recent ceph image registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:fc25524ccb0ea78526257778ab54bfb1a25772b75fcc97df98eb06a0e67e1bf6
Non-zero exit code 1 from /usr/bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:fc25524ccb0ea78526257778ab54bfb1a25772b75fcc97df98eb06a0e67e1bf6 -e NODE_NAME=urz-ceph-01.rz.uni-jena.de -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/06cf7622-0d7c-11ed-936d-88e9a437ebd0:/var/run/ceph:z -v /var/log/ceph/06cf7622-0d7c-11ed-936d-88e9a437ebd0:/var/log/ceph:z -v /var/lib/ceph/06cf7622-0d7c-11ed-936d-88e9a437ebd0/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/06cf7622-0d7c-11ed-936d-88e9a437ebd0/selinux:/sys/fs/selinux:ro -v /:/rootfs -v /tmp/ceph-tmpdfv_07s3:/etc/ceph/ceph.conf:z registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:fc25524ccb0ea78526257778ab54bfb1a25772b75fcc97df98eb06a0e67e1bf6 lvm batch --no-auto /dev/sdaa /dev/sdab /dev/sdac /dev/sdan /dev/sdao /dev/sdap /dev/sdaq /dev/sdar /dev/sdas /dev/sdau /dev/sdav /dev/sdax /dev/sday /dev/sdaz /dev/sdba /dev/sdbb /dev/sdbc /dev/sdbd /dev/sdbe /dev/sdbf /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/sdaf /dev/sdag /dev/sdah /dev/sdc /dev/sdd /dev/sde --block-db-slots 6 --block-db-size 128GB --yes --no-systemd --report
/usr/bin/podman: stderr --> passed data devices: 36 physical, 0 LVM
/usr/bin/podman: stderr --> relative data size: 1.0
/usr/bin/podman: stderr --> passed block_db devices: 6 physical, 0 LVM
/usr/bin/podman: stderr --> 128.00 GB was requested for block_db_size, but only 24.84 GB can be fulfilled

Running with only 6 HDDs and one SSD yields the desired result:

# cephadm ceph-volume lvm batch --no-auto /dev/sdaa /dev/sdab /dev/sdac /dev/sdan /dev/sdao /dev/sdap --db-devices /dev/sdaf --block-db-slots 6 --yes --no-systemd --report
Inferring fsid 06cf7622-0d7c-11ed-936d-88e9a437ebd0
Using recent ceph image registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:fc25524ccb0ea78526257778ab54bfb1a25772b75fcc97df98eb06a0e67e1bf6

Total OSDs: 6

  Type        Path          LV Size      % of device
----------------------------------------------------------
  data        /dev/sdaa     16.37 TB     100.00%
  block_db    /dev/sdaf     149.04 GB    16.67%
----------------------------------------------------------
  data        /dev/sdab     16.37 TB     100.00%
  block_db    /dev/sdaf     149.04 GB    16.67%
----------------------------------------------------------
  data        /dev/sdac     16.37 TB     100.00%
  block_db    /dev/sdaf     149.04 GB    16.67%
----------------------------------------------------------
  data        /dev/sdan     16.37 TB     100.00%
  block_db    /dev/sdaf     149.04 GB    16.67%
----------------------------------------------------------
  data        /dev/sdao     16.37 TB     100.00%
  block_db    /dev/sdaf     149.04 GB    16.67%
----------------------------------------------------------
  data        /dev/sdap     16.37 TB     100.00%
  block_db    /dev/sdaf     149.04 GB    16.67%

# cephadm ceph-volume lvm batch --no-auto /dev/sdaa /dev/sdab /dev/sdac /dev/sdan /dev/sdao /dev/sdap --db-devices /dev/sdaf --block-db-slots 6 --block-db-size 128GB --yes --no-systemd --report
Inferring fsid 06cf7622-0d7c-11ed-936d-88e9a437ebd0
Using recent ceph image registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:fc25524ccb0ea78526257778ab54bfb1a25772b75fcc97df98eb06a0e67e1bf6

Total OSDs: 6

  Type        Path          LV Size      % of device
----------------------------------------------------------
  data        /dev/sdaa     16.37 TB     100.00%
  block_db    /dev/sdaf     128.00 GB    14.31%
----------------------------------------------------------
  data        /dev/sdab     16.37 TB     100.00%
  block_db    /dev/sdaf     128.00 GB    14.31%
----------------------------------------------------------
  data        /dev/sdac     16.37 TB     100.00%
  block_db    /dev/sdaf     128.00 GB    14.31%
----------------------------------------------------------
  data        /dev/sdan     16.37 TB     100.00%
  block_db    /dev/sdaf     128.00 GB    14.31%
----------------------------------------------------------
  data        /dev/sdao     16.37 TB     100.00%
  block_db    /dev/sdaf     128.00 GB    14.31%
----------------------------------------------------------
  data        /dev/sdap     16.37 TB     100.00%
  block_db    /dev/sdaf     128.00 GB    14.31%

Regards
--
Robert Sander
Heinlein Support GmbH
Linux: Akademie - Support - Hosting
http://www.heinlein-support.de

Tel: 030-405051-43
Fax: 030-405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx