Hi,

I'm still trying to fight large Ceph monitor writes. One option I considered is enabling RocksDB compression, as our nodes have more than sufficient RAM and CPU. Unfortunately, the monitors seem to completely ignore the compression setting.

I tried:

- setting ceph config set mon.ceph05 mon_rocksdb_options "write_buffer_size=33554432,compression=kLZ4Compression,level_compaction_dynamic_level_bytes=true" and restarting the test monitor. The monitor started with no RocksDB compression:

debug 2023-10-13T19:47:00.403+0000 7f1cd967a880 4 rocksdb: Compression algorithms supported:
debug 2023-10-13T19:47:00.403+0000 7f1cd967a880 4 rocksdb: kZSTDNotFinalCompression supported: 0
debug 2023-10-13T19:47:00.403+0000 7f1cd967a880 4 rocksdb: kXpressCompression supported: 0
debug 2023-10-13T19:47:00.403+0000 7f1cd967a880 4 rocksdb: kLZ4HCCompression supported: 1
debug 2023-10-13T19:47:00.403+0000 7f1cd967a880 4 rocksdb: kLZ4Compression supported: 1
debug 2023-10-13T19:47:00.403+0000 7f1cd967a880 4 rocksdb: kBZip2Compression supported: 0
debug 2023-10-13T19:47:00.403+0000 7f1cd967a880 4 rocksdb: kZlibCompression supported: 1
debug 2023-10-13T19:47:00.403+0000 7f1cd967a880 4 rocksdb: kSnappyCompression supported: 1
...
debug 2023-10-13T19:47:00.403+0000 7f1cd967a880 4 rocksdb: Options.compression: NoCompression
debug 2023-10-13T19:47:00.403+0000 7f1cd967a880 4 rocksdb: Options.bottommost_compression: Disabled

- setting ceph config set mon mon_rocksdb_options "write_buffer_size=33554432,compression=kLZ4Compression,level_compaction_dynamic_level_bytes=true" and restarting the test monitor. The monitor again started with no RocksDB compression, the same way as above.

In each case the config options were set correctly and were readable with config get.

I also found a suggestion in ceph-users (https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/message/KJM232IHN7FKYI5LODUREN7SVO45BL42/) to set compression in a similar manner. Unfortunately, these options appear to be ignored.

How can I enable RocksDB compression in Ceph monitors? I would very much appreciate your advice and comments.

Best regards,
Zakhar
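
P.S. For completeness, this is roughly the sequence I ran against the test monitor, as a sketch. The daemon name mon.ceph05 is from my cluster, and the restart command assumes a cephadm-managed deployment; adjust both for your environment.

# set the RocksDB options for the test monitor (same values as quoted above)
ceph config set mon.ceph05 mon_rocksdb_options "write_buffer_size=33554432,compression=kLZ4Compression,level_compaction_dynamic_level_bytes=true"

# confirm the value is stored in the config database
ceph config get mon.ceph05 mon_rocksdb_options

# restart the monitor (cephadm; with package installs: systemctl restart ceph-mon@<host>)
ceph orch daemon restart mon.ceph05

# after the restart, check the monitor's startup log for the effective RocksDB
# settings, e.g. the "Options.compression" line quoted above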