Thanks for your response, Josh. Our ceph.conf doesn't have anything but the mon addresses; modern Ceph versions store their configuration in the monitor configuration database. This works rather well for various Ceph components, including the monitors. RocksDB options set via 'ceph config' also reach the monitors correctly (they are set and readable with 'config get'), but for some reason they are being ignored.
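
If I understand your suggestion correctly, it would mean adding something like this to ceph.conf on the mon hosts. This is a rough, untested sketch that reuses the exact option string from my earlier attempts; a [mon.ceph05] section would scope it to a single monitor instead of all of them:

  # /etc/ceph/ceph.conf, sketch only, not yet tested
  [mon]
          # RocksDB options the mon should read at startup, before it is
          # able to fetch anything from the config database
          mon_rocksdb_options = write_buffer_size=33554432,compression=kLZ4Compression,level_compaction_dynamic_level_bytes=true

If that takes effect, the monitor's startup log should show Options.compression as something other than NoCompression, in the same place as in the log excerpts below.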
/Z

On Sat, 14 Oct 2023, 17:40 Josh Baergen, <jbaergen@xxxxxxxxxxxxxxxx> wrote:

> Apologies if you tried this already and I missed it - have you tried
> configuring that setting in /etc/ceph/ceph.conf (or wherever your conf
> file is) instead of via 'ceph config'? I wonder if mon settings like
> this one won't actually apply the way you want because they're needed
> before the mon has the ability to obtain configuration from,
> effectively, itself.
>
> Josh
>
> On Sat, Oct 14, 2023 at 1:32 AM Zakhar Kirpichenko <zakhar@xxxxxxxxx> wrote:
> >
> > I also tried setting RocksDB compression options and deploying a new
> > monitor. The monitor started with no RocksDB compression again.
> >
> > Ceph monitors seem to ignore mon_rocksdb_options set at runtime, at mon
> > start and at mon deploy. How can I enable RocksDB compression in Ceph
> > monitors?
> >
> > Any input from anyone, please?
> >
> > /Z
> >
> > On Fri, 13 Oct 2023 at 23:01, Zakhar Kirpichenko <zakhar@xxxxxxxxx> wrote:
> > >
> > > Hi,
> > >
> > > I'm still trying to fight large Ceph monitor writes. One option I
> > > considered is enabling RocksDB compression, as our nodes have more
> > > than sufficient RAM and CPU. Unfortunately, the monitors seem to
> > > completely ignore the compression setting.
> > >
> > > I tried:
> > >
> > > - setting ceph config set mon.ceph05 mon_rocksdb_options
> > > "write_buffer_size=33554432,compression=kLZ4Compression,level_compaction_dynamic_level_bytes=true"
> > > and restarting the test monitor. The monitor started with no RocksDB
> > > compression:
> > >
> > > debug 2023-10-13T19:47:00.403+0000 7f1cd967a880 4 rocksdb: Compression algorithms supported:
> > > debug 2023-10-13T19:47:00.403+0000 7f1cd967a880 4 rocksdb: kZSTDNotFinalCompression supported: 0
> > > debug 2023-10-13T19:47:00.403+0000 7f1cd967a880 4 rocksdb: kXpressCompression supported: 0
> > > debug 2023-10-13T19:47:00.403+0000 7f1cd967a880 4 rocksdb: kLZ4HCCompression supported: 1
> > > debug 2023-10-13T19:47:00.403+0000 7f1cd967a880 4 rocksdb: kLZ4Compression supported: 1
> > > debug 2023-10-13T19:47:00.403+0000 7f1cd967a880 4 rocksdb: kBZip2Compression supported: 0
> > > debug 2023-10-13T19:47:00.403+0000 7f1cd967a880 4 rocksdb: kZlibCompression supported: 1
> > > debug 2023-10-13T19:47:00.403+0000 7f1cd967a880 4 rocksdb: kSnappyCompression supported: 1
> > > ...
> > > debug 2023-10-13T19:47:00.403+0000 7f1cd967a880 4 rocksdb: Options.compression: NoCompression
> > > debug 2023-10-13T19:47:00.403+0000 7f1cd967a880 4 rocksdb: Options.bottommost_compression: Disabled
> > >
> > > - setting ceph config set mon mon_rocksdb_options
> > > "write_buffer_size=33554432,compression=kLZ4Compression,level_compaction_dynamic_level_bytes=true"
> > > and restarting the test monitor. The monitor started with no RocksDB
> > > compression, the same way as above.
> > >
> > > In each case the config options were correctly set and readable with
> > > config get. I also found a suggestion in ceph-users
> > > (https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/message/KJM232IHN7FKYI5LODUREN7SVO45BL42/)
> > > to set compression in a similar manner. Unfortunately, these options
> > > appear to be ignored.
> > >
> > > How can I enable RocksDB compression in Ceph monitors?
> > >
> > > I would very much appreciate your advice and comments.
> > >
> > > Best regards,
> > > Zakhar
> > >
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx