Hi everyone

How can we display the true osd block size? I get 64K for an hdd osd:

  ceph daemon osd.0 config show | egrep --color=always "alloc_size|bdev_block_size"

    "bdev_block_size": "4096",
    "bluefs_alloc_size": "1048576",
    "bluefs_shared_alloc_size": "65536",
    "bluestore_extent_map_inline_shard_prealloc_size": "256",
    "bluestore_max_alloc_size": "0",
    "bluestore_min_alloc_size": "0",
    "bluestore_min_alloc_size_hdd": "65536",
    "bluestore_min_alloc_size_ssd": "16384",

But it was explained to me that bluestore_min_alloc_size_hdd only affects newly created osd's.

So to check the current block size I can look at the osd metadata instead, which shows 4K:

  ceph osd metadata osd.0 | jq '.bluestore_bdev_block_size'

    "4096"

Checking an object's block size directly also shows 4K:

  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 6.5s4 \
    "cb1594b3-a782-49d0-a19f-68cd48870a63.95398870.14_DriveE/Predator/Doc/2021/03/01101038/1523111.pdf.zip" \
    dump | jq '.stat'

    {
      "size": 32768,
      "blksize": 4096,
      "blocks": 8,
      "nlink": 1
    }

So were these hdd osd's created with a 4K block size, without honoring bluestore_min_alloc_size_hdd? The osd's are running nautilus 14.2.5 and were created on luminous. Newer nvme osd's created on nautilus also show 4K, without honoring bluestore_min_alloc_size_ssd (16K). This is confusing...

Actually I would be happy with 4K, as it is recommended to avoid the over-allocation issue with EC pools. But I would like to understand how to show the true block size of an existing osd...
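One idea, which I have not verified yet: if bluestore persists the effective min_alloc_size in its KV superblock (prefix "S", key "min_alloc_size"), then it should be readable with ceph-kvstore-tool. The osd would have to be stopped first, since the tool needs exclusive access to the store:

  # untested sketch; assumes the superblock prefix "S" holds min_alloc_size
  systemctl stop ceph-osd@0
  ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 get S min_alloc_size
  systemctl start ceph-osd@0

If that is the right place to look, the value should come back as a raw little-endian hexdump (0x10000 would mean 64K). Can someone confirm this?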
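By the way, to compare the reported device block size across all osd's at once, something like this should work (untested, just a jq filter over the full metadata dump):

  # dump the reported bdev block size for every osd in the cluster
  ceph osd metadata | jq -r '.[] | "osd.\(.id) \(.bluestore_bdev_block_size)"'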
Many thanks for your help! ;-)

Cheers
Francois Scheurer


--


EveryWare AG
François Scheurer
Senior Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: francois.scheurer@xxxxxxxxxxxx
web: http://www.everyware.ch