Hi, try looking at the thread "Disks are filling up even if there is not a single placement group on them" in this mailing list. You may be running into the same problem as I did.
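A rough sketch of what I would check first (osd.158 is taken from the output quoted below; adjust the id, and note that the exact counter names and whether the admin socket offers "compact" depend on your release):

  # how much space BlueFS / RocksDB reports as used on this OSD
  ceph daemon osd.158 perf dump | grep -E 'db_total_bytes|db_used_bytes|slow_used_bytes'

  # double-check that really no PGs map to the OSD
  ceph pg ls-by-osd 158

  # ask the OSD to compact its RocksDB; this may or may not return noticeable space
  ceph daemon osd.158 compact

If db_used_bytes stays small while RAW USE keeps growing, the space is not being consumed by the DB itself.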
Michal

On 11/20/23 08:56, Debian wrote:
Hi,

the block.db size is default and not custom configured.

current:

  bluefs.db_used_bytes: 9602859008
  bluefs.db_used_bytes: 469434368

ceph daemon osd.149 config show:

  "bluestore_bitmapallocator_span_size": "1024",
  "bluestore_block_db_size": "0",
  "bluestore_block_size": "107374182400",
  "bluestore_block_wal_size": "100663296",
  "bluestore_cache_size": "0",
  "bluestore_cache_size_hdd": "1073741824",
  "bluestore_cache_size_ssd": "3221225472",
  "bluestore_compression_max_blob_size": "0",
  "bluestore_compression_max_blob_size_hdd": "524288",
  "bluestore_compression_max_blob_size_ssd": "65536",
  "bluestore_compression_min_blob_size": "0",
  "bluestore_compression_min_blob_size_hdd": "131072",
  "bluestore_compression_min_blob_size_ssd": "8192",
  "bluestore_extent_map_inline_shard_prealloc_size": "256",
  "bluestore_extent_map_shard_max_size": "1200",
  "bluestore_extent_map_shard_min_size": "150",
  "bluestore_extent_map_shard_target_size": "500",
  "bluestore_extent_map_shard_target_size_slop": "0.200000",
  "bluestore_max_alloc_size": "0",
  "bluestore_max_blob_size": "0",
  "bluestore_max_blob_size_hdd": "524288",
  "bluestore_max_blob_size_ssd": "65536",
  "bluestore_min_alloc_size": "0",
  "bluestore_min_alloc_size_hdd": "65536",
  "bluestore_min_alloc_size_ssd": "4096",
  "bluestore_prefer_deferred_size": "0",
  "bluestore_prefer_deferred_size_hdd": "32768",
  "bluestore_prefer_deferred_size_ssd": "0",
  "bluestore_rocksdb_options": "compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=268435456,writable_file_max_buffer_size=0,compaction_readahead_size=2097152,max_background_compactions=2",
  "bluefs_alloc_size": "1048576",
  "bluefs_allocator": "hybrid",
  "bluefs_buffered_io": "false",
  "bluefs_check_for_zeros": "false",
  "bluefs_compact_log_sync": "false",
  "bluefs_log_compact_min_ratio": "5.000000",
  "bluefs_log_compact_min_size": "16777216",
  "bluefs_max_log_runway": "4194304",
  "bluefs_max_prefetch": "1048576",
  "bluefs_min_flush_size": "524288",
  "bluefs_min_log_runway": "1048576",
  "bluefs_preextend_wal_files": "false",
  "bluefs_replay_recovery": "false",
  "bluefs_replay_recovery_disable_compact": "false",
  "bluefs_shared_alloc_size": "65536",
  "bluefs_sync_write": "false",

With the OSD performance counters I cannot determine what is using the space, ...

thx & best regards

On 18.11.23 09:05, Eugen Block wrote:

Do you have a large block.db size defined in the ceph.conf (or config store)?

Quoting Debian <debian@xxxxxxxxxx>:

thx for your reply, it shows nothing, ... there are no pgs on the osd, ...

best regards

On 17.11.23 23:09, Eugen Block wrote:

After you create the OSD, run 'ceph pg ls-by-osd {OSD}'; it should show you which PGs are created there, and then you'll know which pool they belong to. Then check the crush rule for that pool again. You can paste the outputs here.

Quoting Debian <debian@xxxxxxxxxx>:

Hi,

after a massive rebalance (tunables) my small SSD OSDs are getting full. I changed my crush rules so there are actually no pgs/pools on them, but the disks stay full:

ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)

ID  CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP   META     AVAIL  %USE  VAR  PGS STATUS TYPE NAME
158 ssd   0.21999 1.00000  224 GiB 194 GiB 193 GiB 22 MiB 1002 MiB 30 GiB 86.68 1.49   0     up      osd.158

inferring bluefs devices from bluestore path
1 : device size 0x37e4400000 : own 0x[1ad3f00000~23c600000] = 0x23c600000 : using 0x39630000 (918 MiB) : bluestore has 0x46e2d0000 (18 GiB) available

When I recreate the OSD, it gets full again.

Any suggestion?
thx & best regards
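For reference, the "inferring bluefs devices from bluestore path" lines quoted above look like output from ceph-bluestore-tool; a hedged sketch of how to reproduce them and cross-check the usage (osd.158 and the default OSD path are assumptions taken from the quoted output):

  # raw vs. data usage per OSD, including the PGS column
  ceph osd df tree

  # with the OSD stopped, show how much of the block device BlueFS owns
  # and how much of that it actually uses
  systemctl stop ceph-osd@158
  ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-158
  systemctl start ceph-osd@158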
--
Michal Strnad
Data Storage Department
CESNET z.s.p.o.
_______________________________________________ ceph-users mailing list -- ceph-users@xxxxxxx To unsubscribe send an email to ceph-users-leave@xxxxxxx