Mark, I ran into something similar to this recently while testing Quincy, and I believe I see what happened here. Based on the user's information, the following non-default option was in use:

ceph config set osd bluestore_rocksdb_options compression=kNoCompression,max_write_buffer_number=32,min_write_buffer_number_to_merge=2,recycle_log_file_num=32,compaction_style=kCompactionStyleLevel,write_buffer_size=67108864,target_file_size_base=67108864,max_background_compactions=31,level0_file_num_compaction_trigger=8,level0_slowdown_writes_trigger=32,level0_stop_writes_trigger=64,max_bytes_for_level_base=536870912,compaction_threads=32,max_bytes_for_level_multiplier=8,flusher_threads=8,compaction_readahead_size=2MB

This overridden value is missing the cap on WAL sizing, which is needed since column family sharding was added: https://github.com/ceph/ceph/pull/35277

Without that option specified as part of bluestore_rocksdb_options, Quincy will use ~100GiB WALs. Everything works great until the WALs fill, and then the cluster progressively caves in on itself.

Cheers,
Tyler
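
P.S. For anyone else who hits this, a rough sketch of the fix is to carry the WAL cap back into the override. The max_total_wal_size value below is my assumption based on the ~1 GiB cap the stock defaults use; verify the exact value against your release's default bluestore_rocksdb_options (e.g. "ceph config help bluestore_rocksdb_options") before applying it:

# Hypothetical example only: the same override as above, with the WAL cap re-added.
# max_total_wal_size is the RocksDB option that bounds total WAL size; 1073741824 = 1 GiB.
ceph config set osd bluestore_rocksdb_options \
  'compression=kNoCompression,max_write_buffer_number=32,min_write_buffer_number_to_merge=2,recycle_log_file_num=32,compaction_style=kCompactionStyleLevel,write_buffer_size=67108864,target_file_size_base=67108864,max_background_compactions=31,level0_file_num_compaction_trigger=8,level0_slowdown_writes_trigger=32,level0_stop_writes_trigger=64,max_bytes_for_level_base=536870912,compaction_threads=32,max_bytes_for_level_multiplier=8,flusher_threads=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824'

# Confirm what the OSDs actually picked up:
ceph config get osd bluestore_rocksdb_options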