Re: After hardware failure tried to recover ceph and followed instructions for recovery using OSDS

Hi Eugen,

The output of "journalctl -u ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37@mon.node01.service" is below:

Dec 05 15:29:58 node01 bash[6847]:    0/ 5 seastore_tm
Dec 05 15:29:58 node01 bash[6847]:    0/ 5 seastore_cleaner
Dec 05 15:29:58 node01 bash[6847]:    0/ 5 seastore_lba
Dec 05 15:29:58 node01 bash[6847]:    0/ 5 seastore_cache
Dec 05 15:29:58 node01 bash[6847]:    0/ 5 seastore_journal
Dec 05 15:29:58 node01 bash[6847]:    0/ 5 seastore_device
Dec 05 15:29:58 node01 bash[6847]:    0/ 5 alienstore
Dec 05 15:29:58 node01 bash[6847]:    1/ 5 mclock
Dec 05 15:29:58 node01 bash[6847]:    1/ 5 ceph_exporter
Dec 05 15:29:58 node01 bash[6847]:   -2/-2 (syslog threshold)
Dec 05 15:29:58 node01 bash[6847]:   99/99 (stderr threshold)
Dec 05 15:29:58 node01 bash[6847]: --- pthread ID / name mapping for recent threads ---
Dec 05 15:29:58 node01 bash[6847]:   7f7ef0736700 / ceph-mon
Dec 05 15:29:58 node01 bash[6847]:   7f7ef9943700 / admin_socket
Dec 05 15:29:58 node01 bash[6847]:   7f7f004e4b80 / ceph-mon
Dec 05 15:29:58 node01 bash[6847]:   max_recent     10000
Dec 05 15:29:58 node01 bash[6847]:   max_new        10000
Dec 05 15:29:58 node01 bash[6847]:   log_file /var/lib/ceph/crash/2023-12-05T13:29:58.387815Z_29b1b3ed-be64-4235-a2f2-78fa83068>
Dec 05 15:29:58 node01 bash[6847]: --- end dump of recent events ---
Dec 05 15:29:58 node01 systemd[1]: ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37@mon.node01.service: Main process exited, code=exit>
Dec 05 15:29:58 node01 systemd[1]: ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37@mon.node01.service: Failed with result 'exit-code'.
Dec 05 15:30:08 node01 systemd[1]: ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37@mon.node01.service: Scheduled restart job, restart>
Dec 05 15:30:08 node01 systemd[1]: Stopped Ceph mon.node01 for be4304e4-b0d5-11ec-8c6a-2965d4229f37.
Dec 05 15:30:08 node01 systemd[1]: Started Ceph mon.node01 for be4304e4-b0d5-11ec-8c6a-2965d4229f37.
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.855+0000 7f00d16deb80  0 set uid:gid to 167:167 (ceph:ceph)
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.855+0000 7f00d16deb80  0 ceph version 17.2.7 (b12291d110049b2f35e3>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.855+0000 7f00d16deb80  0 pidfile_write: ignore empty --pid-file
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  0 load: jerasure load: lrc
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: RocksDB version: 6.15.5
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Git sha rocksdb_build_git_sha:@0@
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Compile date Oct 25 2023
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: DB SUMMARY
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: DB Session ID:  I4Q3Q1EPAK1UE4HA>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: CURRENT file:  CURRENT
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: IDENTITY file:  IDENTITY
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: MANIFEST file:  MANIFEST-000271 >
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: SST files in /var/lib/ceph/mon/c>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Write Ahead Log file in /var/lib>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         Options.>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                       Options.cr>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         Options.>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                               Op>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                                 >
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                                 >
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                                O>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.max_file_>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                              Opt>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                               Op>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                       Options.ma>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options.max_man>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                   Options.log_fi>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                       Options.ke>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                    Options.recyc>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         Options.>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                        Options.a>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                       Options.al>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                        Options.u>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                        Options.u>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:          Options.create_missing_>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                              Opt>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                                 >
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.table_cac>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         Options.>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                       Options.WA>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                        Options.m>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.manifest_pre>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                     Options.is_f>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                   Options.advise>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                    Options.db_wr>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                    Options.write>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.access_hint_on_c>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:  Options.new_table_reader_for_co>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:           Options.random_access_>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                      Options.use>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                            Optio>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:     Options.sst_file_manager.rat>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                       Options.wa>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options.enable_>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options.enable_>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options.unorder>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.allow_concurrent>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:      Options.enable_write_thread>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.write_thread>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:            Options.write_thread_>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                               Op>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                              Opt>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.avoid_flush_>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.allow_ingest>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.preserve_del>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.two_write_qu>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.manual_wal_f>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.atomic_flush>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.avoid_unnece>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.persist_>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.write_db>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.log_read>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.file_che>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.best_eff>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.max_bgerr>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:            Options.bgerror_resum>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.allow_data_i>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.db_host_id: >
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.max_backgrou>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.max_backgrou>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.max_subcompa>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.avoid_flush_>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:           Options.writable_file_>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.delayed_writ>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.max_total_wa>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.delete_obsol>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                   Options.stats_>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.stats_pe>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.stats_hi>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                          Options>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                          Options>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                      Options.wal>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                   Options.strict>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:       Options.compaction_readahe>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options.max_bac>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Compression algorithms supported:
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kZSTDNotFinalCompression>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kZSTD supported: 0
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kXpressCompression suppo>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kLZ4HCCompression suppor>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kZlibCompression support>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kSnappyCompression suppo>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kLZ4Compression supporte>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kBZip2Compression suppor>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Fast CRC32 supported: Supported >
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:4724] Recover>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: [db/column_family.cc:595] ------>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:               Options.comparator>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:           Options.merge_operator:
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:        Options.compaction_filter>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:        Options.compaction_filter>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:  Options.sst_partitioner_factory>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.memtable_factory>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:            Options.table_factory>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:            table_factory options>
Dec 05 15:30:08 node01 bash[7017]:   cache_index_and_filter_blocks: 1
Dec 05 15:30:08 node01 bash[7017]:   cache_index_and_filter_blocks_with_high_priority: 0
Dec 05 15:30:08 node01 bash[7017]:   pin_l0_filter_and_index_blocks_in_cache: 0
Dec 05 15:30:08 node01 bash[7017]:   pin_top_level_index_and_filter: 1
Dec 05 15:30:08 node01 bash[7017]:   index_type: 0
Dec 05 15:30:08 node01 bash[7017]:   data_block_index_type: 0
Dec 05 15:30:08 node01 bash[7017]:   index_shortening: 1
Dec 05 15:30:08 node01 bash[7017]:   data_block_hash_table_util_ratio: 0.750000
Dec 05 15:30:08 node01 bash[7017]:   hash_index_allow_collision: 1
Dec 05 15:30:08 node01 bash[7017]:   checksum: 1
Dec 05 15:30:08 node01 bash[7017]:   no_block_cache: 0
Dec 05 15:30:08 node01 bash[7017]:   block_cache: 0x562461f27090
Dec 05 15:30:08 node01 bash[7017]:   block_cache_name: BinnedLRUCache
Dec 05 15:30:08 node01 bash[7017]:   block_cache_options:
Dec 05 15:30:08 node01 bash[7017]:     capacity : 536870912
Dec 05 15:30:08 node01 bash[7017]:     num_shard_bits : 4
Dec 05 15:30:08 node01 bash[7017]:     strict_capacity_limit : 0
Dec 05 15:30:08 node01 bash[7017]:     high_pri_pool_ratio: 0.000
Dec 05 15:30:08 node01 bash[7017]:   block_cache_compressed: (nil)
Dec 05 15:30:08 node01 bash[7017]:   persistent_cache: (nil)
Dec 05 15:30:08 node01 bash[7017]:   block_size: 4096
Dec 05 15:30:08 node01 bash[7017]:   block_size_deviation: 10
Dec 05 15:30:08 node01 bash[7017]:   block_restart_interval: 16
Dec 05 15:30:08 node01 bash[7017]:   index_block_restart_interval: 1
Dec 05 15:30:08 node01 bash[7017]:   metadata_block_size: 4096
Dec 05 15:30:08 node01 bash[7017]:   partition_filters: 0
Dec 05 15:30:08 node01 bash[7017]:   use_delta_encoding: 1
Dec 05 15:30:08 node01 bash[7017]:   filter_policy: rocksdb.BuiltinBloomFilter
Dec 05 15:30:08 node01 bash[7017]:   whole_key_filtering: 1
Dec 05 15:30:08 node01 bash[7017]:   verify_compression: 0
Dec 05 15:30:08 node01 bash[7017]:   read_amp_bytes_per_bit: 0
Dec 05 15:30:08 node01 bash[7017]:   format_version: 4
Dec 05 15:30:08 node01 bash[7017]:   enable_index_compression: 1
Dec 05 15:30:08 node01 bash[7017]:   block_align: 0
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:        Options.write_buffer_size>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:  Options.max_write_buffer_number>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:          Options.compression: No>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options.bottomm>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:       Options.prefix_extractor: >
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:   Options.memtable_insert_with_h>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.num_levels: 7
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:        Options.min_write_buffer_>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:     Options.max_write_buffer_num>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:     Options.max_write_buffer_siz>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:            Options.bottommost_co>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options.bottomm>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:               Options.bottommost>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.bottommost_compr>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.bottommost_compr>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.bottommost_compr>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options.bottomm>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:            Options.compression_o>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options.compres>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:               Options.compressio>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.compression_opts>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.compression_opts>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.compression_opts>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options.compres>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:      Options.level0_file_num_com>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:          Options.level0_slowdown>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:              Options.level0_stop>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                   Options.target>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.target_file_>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.max_bytes>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.level_compaction_dynamic>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:          Options.max_bytes_for_l>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.max_bytes_for_level_mult>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.max_bytes_for_level_mult>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.max_bytes_for_level_mult>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.max_bytes_for_level_mult>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.max_bytes_for_level_mult>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.max_bytes_for_level_mult>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.max_bytes_for_level_mult>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:       Options.max_sequential_ski>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                    Options.max_c>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                        Options.a>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:   Options.soft_pending_compactio>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:   Options.hard_pending_compactio>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:       Options.rate_limit_delay_m>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.disable_a>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                        Options.c>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                          Options>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_options_unive>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_options_unive>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_options_unive>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_options_unive>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_options_unive>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_options_unive>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_options_fifo.>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_options_fifo.>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                   Options.table_>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                   Options.inplac>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.inplace_>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:               Options.memtable_p>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:               Options.memtable_w>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:   Options.memtable_huge_page_siz>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                           Option>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                    Options.max_s>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.optimize_>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.paranoid_>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.force_con>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.report_bg>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                               Op>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:          Options.periodic_compac>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                    Options.enabl>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                        Options.m>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                       Options.bl>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.blob_comp>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:       Options.enable_blob_garbag>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:   Options.blob_garbage_collectio>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:3457] More ex>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:4764] Recover>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:4779] Column >
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:4082] Creatin>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:3457] More ex>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: EVENT_LOG_v1 {"time_micros": 170>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/db_impl/db_impl_open.cc:845]>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:4082] Creatin>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:3457] More ex>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.867+0000 7f00d16deb80  4 rocksdb: EVENT_LOG_v1 {"time_micros": 170>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.867+0000 7f00d16deb80  4 rocksdb: [file/delete_scheduler.cc:69] De>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.867+0000 7f00d16deb80  4 rocksdb: [db/db_impl/db_impl_open.cc:1700>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.867+0000 7f00d16deb80  4 rocksdb: DB pointer 0x562461f90000
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.867+0000 7f00c1930700  4 rocksdb: [db/db_impl/db_impl.cc:901] ---->
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.867+0000 7f00c1930700  4 rocksdb: [db/db_impl/db_impl.cc:903]
Dec 05 15:30:08 node01 bash[7017]: ** DB Stats **
Dec 05 15:30:08 node01 bash[7017]: Uptime(secs): 0.0 total, 0.0 interval
Dec 05 15:30:08 node01 bash[7017]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0>
Dec 05 15:30:08 node01 bash[7017]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Dec 05 15:30:08 node01 bash[7017]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 15:30:08 node01 bash[7017]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.0>
Dec 05 15:30:08 node01 bash[7017]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
Dec 05 15:30:08 node01 bash[7017]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 15:30:08 node01 bash[7017]: ** Compaction Stats [default] **
Dec 05 15:30:08 node01 bash[7017]: Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp>
Dec 05 15:30:08 node01 bash[7017]: -------------------------------------------------------------------------------------------->
Dec 05 15:30:08 node01 bash[7017]:   L0      2/0    7.21 MB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]:   L6      1/0    7.22 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]:  Sum      3/0   14.43 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]:  Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]: ** Compaction Stats [default] **
Dec 05 15:30:08 node01 bash[7017]: Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W->
Dec 05 15:30:08 node01 bash[7017]: -------------------------------------------------------------------------------------------->
Dec 05 15:30:08 node01 bash[7017]: Uptime(secs): 0.0 total, 0.0 interval
Dec 05 15:30:08 node01 bash[7017]: Flush(GB): cumulative 0.000, interval 0.000
Dec 05 15:30:08 node01 bash[7017]: AddFile(GB): cumulative 0.000, interval 0.000
Dec 05 15:30:08 node01 bash[7017]: AddFile(Total Files): cumulative 0, interval 0
Dec 05 15:30:08 node01 bash[7017]: AddFile(L0 Files): cumulative 0, interval 0
Dec 05 15:30:08 node01 bash[7017]: AddFile(Keys): cumulative 0, interval 0
Dec 05 15:30:08 node01 bash[7017]: Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 sec>
Dec 05 15:30:08 node01 bash[7017]: Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 secon>
Dec 05 15:30:08 node01 bash[7017]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 le>
Dec 05 15:30:08 node01 bash[7017]: ** File Read Latency Histogram By Level [default] **
Dec 05 15:30:08 node01 bash[7017]: ** Compaction Stats [default] **
Dec 05 15:30:08 node01 bash[7017]: Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp>
Dec 05 15:30:08 node01 bash[7017]: -------------------------------------------------------------------------------------------->
Dec 05 15:30:08 node01 bash[7017]:   L0      2/0    7.21 MB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]:   L6      1/0    7.22 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]:  Sum      3/0   14.43 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]:  Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]: ** Compaction Stats [default] **
Dec 05 15:30:08 node01 bash[7017]: Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W->
Dec 05 15:30:08 node01 bash[7017]: -------------------------------------------------------------------------------------------->
Dec 05 15:30:08 node01 bash[7017]: Uptime(secs): 0.0 total, 0.0 interval
Dec 05 15:30:08 node01 bash[7017]: Flush(GB): cumulative 0.000, interval 0.000
Dec 05 15:30:08 node01 bash[7017]: AddFile(GB): cumulative 0.000, interval 0.000
Dec 05 15:30:08 node01 bash[7017]: AddFile(Total Files): cumulative 0, interval 0
Dec 05 15:30:08 node01 bash[7017]: AddFile(L0 Files): cumulative 0, interval 0
Dec 05 15:30:08 node01 bash[7017]: AddFile(Keys): cumulative 0, interval 0
Dec 05 15:30:08 node01 bash[7017]: Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 sec>
Dec 05 15:30:08 node01 bash[7017]: Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 secon>
Dec 05 15:30:08 node01 bash[7017]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 le>
Dec 05 15:30:08 node01 bash[7017]: ** File Read Latency Histogram By Level [default] **
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.875+0000 7f00d16deb80  0 starting mon.node01 rank 0 at public addr>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.875+0000 7f00d16deb80  1 mon.node01@-1(???) e0 preinit fsid be4304>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.875+0000 7f00d16deb80  0 mon.node01@-1(???).osd e9898 crush map ha>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.875+0000 7f00d16deb80  0 mon.node01@-1(???).osd e9898 crush map ha>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.875+0000 7f00d16deb80  0 mon.node01@-1(???).osd e9898 crush map ha>
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.875+0000 7f00d16deb80  0 mon.node01@-1(???).osd e9898 crush map ha>
Dec 05 15:30:08 node01 bash[7017]: terminate called after throwing an instance of 'std::invalid_argument'
Dec 05 15:30:08 node01 bash[7017]:   what():  stoull
Dec 05 15:30:08 node01 bash[7017]: *** Caught signal (Aborted) **
Dec 05 15:30:08 node01 bash[7017]:  in thread 7f00d16deb80 thread_name:ceph-mon
Dec 05 15:30:08 node01 bash[7017]:  ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)
Dec 05 15:30:08 node01 bash[7017]:  1: /lib64/libpthread.so.0(+0x12cf0) [0x7f00ceb4bcf0]
Dec 05 15:30:08 node01 bash[7017]:  2: gsignal()
Dec 05 15:30:08 node01 bash[7017]:  3: abort()
Dec 05 15:30:08 node01 bash[7017]:  4: /lib64/libstdc++.so.6(+0x9009b) [0x7f00ce15d09b]
Dec 05 15:30:08 node01 bash[7017]:  5: /lib64/libstdc++.so.6(+0x9654c) [0x7f00ce16354c]
Dec 05 15:30:08 node01 bash[7017]:  6: /lib64/libstdc++.so.6(+0x965a7) [0x7f00ce1635a7]
Dec 05 15:30:08 node01 bash[7017]:  7: /lib64/libstdc++.so.6(+0x96808) [0x7f00ce163808]
Dec 05 15:30:08 node01 bash[7017]:  8: /lib64/libstdc++.so.6(+0x91ff9) [0x7f00ce15eff9]
Dec 05 15:30:08 node01 bash[7017]:  9: (LogMonitor::log_external_backlog()+0xe70) [0x5624609dc190]
Dec 05 15:30:08 node01 bash[7017]:  10: (LogMonitor::update_from_paxos(bool*)+0x54) [0x5624609dc334]
Dec 05 15:30:08 node01 bash[7017]:  11: (Monitor::refresh_from_paxos(bool*)+0x104) [0x56246094fe74]
Dec 05 15:30:08 node01 bash[7017]:  12: (Monitor::preinit()+0x95d) [0x56246097e3ad]
Dec 05 15:30:08 node01 bash[7017]:  13: main()
Dec 05 15:30:08 node01 bash[7017]:  14: __libc_start_main()
Dec 05 15:30:08 node01 bash[7017]:  15: _start()
Dec 05 15:30:08 node01 bash[7017]: debug 2023-12-05T13:30:08.879+0000 7f00d16deb80 -1 *** Caught signal (Aborted) **
Dec 05 15:30:08 node01 bash[7017]:  in thread 7f00d16deb80 thread_name:ceph-mon
Dec 05 15:30:08 node01 bash[7017]:  ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)
Dec 05 15:30:08 node01 bash[7017]:  1: /lib64/libpthread.so.0(+0x12cf0) [0x7f00ceb4bcf0]
Dec 05 15:30:08 node01 bash[7017]:  2: gsignal()
Dec 05 15:30:08 node01 bash[7017]:  3: abort()
Dec 05 15:30:08 node01 bash[7017]:  4: /lib64/libstdc++.so.6(+0x9009b) [0x7f00ce15d09b]
Dec 05 15:30:08 node01 bash[7017]:  5: /lib64/libstdc++.so.6(+0x9654c) [0x7f00ce16354c]
Dec 05 15:30:08 node01 bash[7017]:  6: /lib64/libstdc++.so.6(+0x965a7) [0x7f00ce1635a7]
Dec 05 15:30:08 node01 bash[7017]:  7: /lib64/libstdc++.so.6(+0x96808) [0x7f00ce163808]
Dec 05 15:30:08 node01 bash[7017]:  8: /lib64/libstdc++.so.6(+0x91ff9) [0x7f00ce15eff9]
Dec 05 15:30:08 node01 bash[7017]:  9: (LogMonitor::log_external_backlog()+0xe70) [0x5624609dc190]
Dec 05 15:30:08 node01 bash[7017]:  10: (LogMonitor::update_from_paxos(bool*)+0x54) [0x5624609dc334]
Dec 05 15:30:08 node01 bash[7017]:  11: (Monitor::refresh_from_paxos(bool*)+0x104) [0x56246094fe74]
Dec 05 15:30:08 node01 bash[7017]:  12: (Monitor::preinit()+0x95d) [0x56246097e3ad]
Dec 05 15:30:08 node01 bash[7017]:  13: main()
Dec 05 15:30:08 node01 bash[7017]:  14: __libc_start_main()
Dec 05 15:30:08 node01 bash[7017]:  15: _start()
Dec 05 15:30:08 node01 bash[7017]:  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Dec 05 15:30:08 node01 bash[7017]: --- begin dump of recent events ---
Dec 05 15:30:08 node01 bash[7017]: debug   -290> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -289> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -288> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -287> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -286> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -285> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -284> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -283> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -282> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -281> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -280> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -279> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -278> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -277> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -276> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -275> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -274> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -273> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -272> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -271> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -270> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -269> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -268> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -267> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -266> 2023-12-05T13:30:08.855+0000 7f00d16deb80  0 set uid:gid to 167:167 (ceph:ceph)
Dec 05 15:30:08 node01 bash[7017]: debug   -265> 2023-12-05T13:30:08.855+0000 7f00d16deb80  0 ceph version 17.2.7 (b12291d11004>
Dec 05 15:30:08 node01 bash[7017]: debug   -264> 2023-12-05T13:30:08.855+0000 7f00d16deb80  0 pidfile_write: ignore empty --pid>
Dec 05 15:30:08 node01 bash[7017]: debug   -263> 2023-12-05T13:30:08.855+0000 7f00d16deb80  5 asok(0x56246214a000) init /var/ru>
Dec 05 15:30:08 node01 bash[7017]: debug   -262> 2023-12-05T13:30:08.855+0000 7f00d16deb80  5 asok(0x56246214a000) bind_and_lis>
Dec 05 15:30:08 node01 bash[7017]: debug   -261> 2023-12-05T13:30:08.855+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -260> 2023-12-05T13:30:08.855+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -259> 2023-12-05T13:30:08.855+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -258> 2023-12-05T13:30:08.855+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -257> 2023-12-05T13:30:08.855+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -256> 2023-12-05T13:30:08.855+0000 7f00cab3d700  5 asok(0x56246214a000) entry start
Dec 05 15:30:08 node01 bash[7017]: debug   -255> 2023-12-05T13:30:08.859+0000 7f00d16deb80  0 load: jerasure load: lrc
Dec 05 15:30:08 node01 bash[7017]: debug   -254> 2023-12-05T13:30:08.859+0000 7f00d16deb80  1  set rocksdb option level_compact>
Dec 05 15:30:08 node01 bash[7017]: debug   -253> 2023-12-05T13:30:08.859+0000 7f00d16deb80  1  set rocksdb option write_buffer_>
Dec 05 15:30:08 node01 bash[7017]: debug   -252> 2023-12-05T13:30:08.859+0000 7f00d16deb80  1  set rocksdb option compression =>
Dec 05 15:30:08 node01 bash[7017]: debug   -251> 2023-12-05T13:30:08.859+0000 7f00d16deb80  1  set rocksdb option level_compact>
Dec 05 15:30:08 node01 bash[7017]: debug   -250> 2023-12-05T13:30:08.859+0000 7f00d16deb80  1  set rocksdb option write_buffer_>
Dec 05 15:30:08 node01 bash[7017]: debug   -249> 2023-12-05T13:30:08.859+0000 7f00d16deb80  1  set rocksdb option compression =>
Dec 05 15:30:08 node01 bash[7017]: debug   -248> 2023-12-05T13:30:08.859+0000 7f00d16deb80  5 rocksdb: verify_sharding column f>
Dec 05 15:30:08 node01 bash[7017]: debug   -247> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: RocksDB version: 6.15.5
Dec 05 15:30:08 node01 bash[7017]: debug   -246> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Git sha rocksdb_build_gi>
Dec 05 15:30:08 node01 bash[7017]: debug   -245> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Compile date Oct 25 2023
Dec 05 15:30:08 node01 bash[7017]: debug   -244> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: DB SUMMARY
Dec 05 15:30:08 node01 bash[7017]: debug   -243> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: DB Session ID:  I4Q3Q1EP>
Dec 05 15:30:08 node01 bash[7017]: debug   -242> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: CURRENT file:  CURRENT
Dec 05 15:30:08 node01 bash[7017]: debug   -241> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: IDENTITY file:  IDENTITY
Dec 05 15:30:08 node01 bash[7017]: debug   -240> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: MANIFEST file:  MANIFEST>
Dec 05 15:30:08 node01 bash[7017]: debug   -239> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: SST files in /var/lib/ce>
Dec 05 15:30:08 node01 bash[7017]: debug   -238> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Write Ahead Log file in >
Dec 05 15:30:08 node01 bash[7017]: debug   -237> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -236> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                       Op>
Dec 05 15:30:08 node01 bash[7017]: debug   -235> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -234> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -233> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -232> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -231> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -230> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.m>
Dec 05 15:30:08 node01 bash[7017]: debug   -229> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -228> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -227> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                       Op>
Dec 05 15:30:08 node01 bash[7017]: debug   -226> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options>
Dec 05 15:30:08 node01 bash[7017]: debug   -225> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                   Option>
Dec 05 15:30:08 node01 bash[7017]: debug   -224> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                       Op>
Dec 05 15:30:08 node01 bash[7017]: debug   -223> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                    Optio>
Dec 05 15:30:08 node01 bash[7017]: debug   -222> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -221> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                        O>
Dec 05 15:30:08 node01 bash[7017]: debug   -220> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                       Op>
Dec 05 15:30:08 node01 bash[7017]: debug   -219> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                        O>
Dec 05 15:30:08 node01 bash[7017]: debug   -218> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                        O>
Dec 05 15:30:08 node01 bash[7017]: debug   -217> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:          Options.create_>
Dec 05 15:30:08 node01 bash[7017]: debug   -216> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -215> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -214> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.t>
Dec 05 15:30:08 node01 bash[7017]: debug   -213> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -212> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                       Op>
Dec 05 15:30:08 node01 bash[7017]: debug   -211> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                        O>
Dec 05 15:30:08 node01 bash[7017]: debug   -210> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.mani>
Dec 05 15:30:08 node01 bash[7017]: debug   -209> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                     Opti>
Dec 05 15:30:08 node01 bash[7017]: debug   -208> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                   Option>
Dec 05 15:30:08 node01 bash[7017]: debug   -207> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                    Optio>
Dec 05 15:30:08 node01 bash[7017]: debug   -206> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                    Optio>
Dec 05 15:30:08 node01 bash[7017]: debug   -205> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.access_h>
Dec 05 15:30:08 node01 bash[7017]: debug   -204> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:  Options.new_table_reade>
Dec 05 15:30:08 node01 bash[7017]: debug   -203> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:           Options.random>
Dec 05 15:30:08 node01 bash[7017]: debug   -202> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                      Opt>
Dec 05 15:30:08 node01 bash[7017]: debug   -201> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -200> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:     Options.sst_file_man>
Dec 05 15:30:08 node01 bash[7017]: debug   -199> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                       Op>
Dec 05 15:30:08 node01 bash[7017]: debug   -198> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options>
Dec 05 15:30:08 node01 bash[7017]: debug   -197> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options>
Dec 05 15:30:08 node01 bash[7017]: debug   -196> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options>
Dec 05 15:30:08 node01 bash[7017]: debug   -195> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.allow_co>
Dec 05 15:30:08 node01 bash[7017]: debug   -194> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:      Options.enable_writ>
Dec 05 15:30:08 node01 bash[7017]: debug   -193> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.writ>
Dec 05 15:30:08 node01 bash[7017]: debug   -192> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:            Options.write>
Dec 05 15:30:08 node01 bash[7017]: debug   -191> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -190> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -189> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.avoi>
Dec 05 15:30:08 node01 bash[7017]: debug   -188> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.allo>
Dec 05 15:30:08 node01 bash[7017]: debug   -187> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.pres>
Dec 05 15:30:08 node01 bash[7017]: debug   -186> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.two_>
Dec 05 15:30:08 node01 bash[7017]: debug   -185> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.manu>
Dec 05 15:30:08 node01 bash[7017]: debug   -184> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.atom>
Dec 05 15:30:08 node01 bash[7017]: debug   -183> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.avoi>
Dec 05 15:30:08 node01 bash[7017]: debug   -182> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.>
Dec 05 15:30:08 node01 bash[7017]: debug   -181> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.>
Dec 05 15:30:08 node01 bash[7017]: debug   -180> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.>
Dec 05 15:30:08 node01 bash[7017]: debug   -179> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.>
Dec 05 15:30:08 node01 bash[7017]: debug   -178> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.>
Dec 05 15:30:08 node01 bash[7017]: debug   -177> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.m>
Dec 05 15:30:08 node01 bash[7017]: debug   -176> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:            Options.bgerr>
Dec 05 15:30:08 node01 bash[7017]: debug   -175> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.allo>
Dec 05 15:30:08 node01 bash[7017]: debug   -174> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.db_h>
Dec 05 15:30:08 node01 bash[7017]: debug   -173> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.max_>
Dec 05 15:30:08 node01 bash[7017]: debug   -172> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.max_>
Dec 05 15:30:08 node01 bash[7017]: debug   -171> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.max_>
Dec 05 15:30:08 node01 bash[7017]: debug   -170> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.avoi>
Dec 05 15:30:08 node01 bash[7017]: debug   -169> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:           Options.writab>
Dec 05 15:30:08 node01 bash[7017]: debug   -168> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.dela>
Dec 05 15:30:08 node01 bash[7017]: debug   -167> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.max_>
Dec 05 15:30:08 node01 bash[7017]: debug   -166> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.dele>
Dec 05 15:30:08 node01 bash[7017]: debug   -165> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                   Option>
Dec 05 15:30:08 node01 bash[7017]: debug   -164> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.>
Dec 05 15:30:08 node01 bash[7017]: debug   -163> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.>
Dec 05 15:30:08 node01 bash[7017]: debug   -162> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -161> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -160> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                      Opt>
Dec 05 15:30:08 node01 bash[7017]: debug   -159> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                   Option>
Dec 05 15:30:08 node01 bash[7017]: debug   -158> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:       Options.compaction>
Dec 05 15:30:08 node01 bash[7017]: debug   -157> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options>
Dec 05 15:30:08 node01 bash[7017]: debug   -156> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Compression algorithms s>
Dec 05 15:30:08 node01 bash[7017]: debug   -155> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kZSTDNotFinalCom>
Dec 05 15:30:08 node01 bash[7017]: debug   -154> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kZSTD supported:>
Dec 05 15:30:08 node01 bash[7017]: debug   -153> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kXpressCompressi>
Dec 05 15:30:08 node01 bash[7017]: debug   -152> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kLZ4HCCompressio>
Dec 05 15:30:08 node01 bash[7017]: debug   -151> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kZlibCompression>
Dec 05 15:30:08 node01 bash[7017]: debug   -150> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kSnappyCompressi>
Dec 05 15:30:08 node01 bash[7017]: debug   -149> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kLZ4Compression >
Dec 05 15:30:08 node01 bash[7017]: debug   -148> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kBZip2Compressio>
Dec 05 15:30:08 node01 bash[7017]: debug   -147> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Fast CRC32 supported: Su>
Dec 05 15:30:08 node01 bash[7017]: debug   -146> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:4724]>
Dec 05 15:30:08 node01 bash[7017]: debug   -145> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: [db/column_family.cc:595>
Dec 05 15:30:08 node01 bash[7017]: debug   -144> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:               Options.co>
Dec 05 15:30:08 node01 bash[7017]: debug   -143> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:           Options.merge_>
Dec 05 15:30:08 node01 bash[7017]: debug   -142> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:        Options.compactio>
Dec 05 15:30:08 node01 bash[7017]: debug   -141> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:        Options.compactio>
Dec 05 15:30:08 node01 bash[7017]: debug   -140> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:  Options.sst_partitioner>
Dec 05 15:30:08 node01 bash[7017]: debug   -139> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.memtable>
Dec 05 15:30:08 node01 bash[7017]: debug   -138> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:            Options.table>
Dec 05 15:30:08 node01 bash[7017]: debug   -137> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:            table_factory>
Dec 05 15:30:08 node01 bash[7017]:   cache_index_and_filter_blocks: 1
Dec 05 15:30:08 node01 bash[7017]:   cache_index_and_filter_blocks_with_high_priority: 0
Dec 05 15:30:08 node01 bash[7017]:   pin_l0_filter_and_index_blocks_in_cache: 0
Dec 05 15:30:08 node01 bash[7017]:   pin_top_level_index_and_filter: 1
Dec 05 15:30:08 node01 bash[7017]:   index_type: 0
Dec 05 15:30:08 node01 bash[7017]:   data_block_index_type: 0
Dec 05 15:30:08 node01 bash[7017]:   index_shortening: 1
Dec 05 15:30:08 node01 bash[7017]:   data_block_hash_table_util_ratio: 0.750000
Dec 05 15:30:08 node01 bash[7017]:   hash_index_allow_collision: 1
Dec 05 15:30:08 node01 bash[7017]:   checksum: 1
Dec 05 15:30:08 node01 bash[7017]:   no_block_cache: 0
Dec 05 15:30:08 node01 bash[7017]:   block_cache: 0x562461f27090
Dec 05 15:30:08 node01 bash[7017]:   block_cache_name: BinnedLRUCache
Dec 05 15:30:08 node01 bash[7017]:   block_cache_options:
Dec 05 15:30:08 node01 bash[7017]:     capacity : 536870912
Dec 05 15:30:08 node01 bash[7017]:     num_shard_bits : 4
Dec 05 15:30:08 node01 bash[7017]:     strict_capacity_limit : 0
Dec 05 15:30:08 node01 bash[7017]:     high_pri_pool_ratio: 0.000
Dec 05 15:30:08 node01 bash[7017]:   block_cache_compressed: (nil)
Dec 05 15:30:08 node01 bash[7017]:   persistent_cache: (nil)
Dec 05 15:30:08 node01 bash[7017]:   block_size: 4096
Dec 05 15:30:08 node01 bash[7017]:   block_size_deviation: 10
Dec 05 15:30:08 node01 bash[7017]:   block_restart_interval: 16
Dec 05 15:30:08 node01 bash[7017]:   index_block_restart_interval: 1
Dec 05 15:30:08 node01 bash[7017]:   metadata_block_size: 4096
Dec 05 15:30:08 node01 bash[7017]:   partition_filters: 0
Dec 05 15:30:08 node01 bash[7017]:   use_delta_encoding: 1
Dec 05 15:30:08 node01 bash[7017]:   filter_policy: rocksdb.BuiltinBloomFilter
Dec 05 15:30:08 node01 bash[7017]:   whole_key_filtering: 1
Dec 05 15:30:08 node01 bash[7017]:   verify_compression: 0
Dec 05 15:30:08 node01 bash[7017]:   read_amp_bytes_per_bit: 0
Dec 05 15:30:08 node01 bash[7017]:   format_version: 4
Dec 05 15:30:08 node01 bash[7017]:   enable_index_compression: 1
Dec 05 15:30:08 node01 bash[7017]:   block_align: 0
Dec 05 15:30:08 node01 bash[7017]: debug   -136> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:        Options.write_buf>
Dec 05 15:30:08 node01 bash[7017]: debug   -135> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:  Options.max_write_buffe>
Dec 05 15:30:08 node01 bash[7017]: debug   -134> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:          Options.compres>
Dec 05 15:30:08 node01 bash[7017]: debug   -133> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options>
Dec 05 15:30:08 node01 bash[7017]: debug   -132> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:       Options.prefix_ext>
Dec 05 15:30:08 node01 bash[7017]: debug   -131> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:   Options.memtable_inser>
Dec 05 15:30:08 node01 bash[7017]: debug   -130> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.num_>
Dec 05 15:30:08 node01 bash[7017]: debug   -129> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:        Options.min_write>
Dec 05 15:30:08 node01 bash[7017]: debug   -128> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:     Options.max_write_bu>
Dec 05 15:30:08 node01 bash[7017]: debug   -127> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:     Options.max_write_bu>
Dec 05 15:30:08 node01 bash[7017]: debug   -126> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:            Options.botto>
Dec 05 15:30:08 node01 bash[7017]: debug   -125> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options>
Dec 05 15:30:08 node01 bash[7017]: debug   -124> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:               Options.bo>
Dec 05 15:30:08 node01 bash[7017]: debug   -123> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.bottommo>
Dec 05 15:30:08 node01 bash[7017]: debug   -122> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.bottommo>
Dec 05 15:30:08 node01 bash[7017]: debug   -121> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.bottommo>
Dec 05 15:30:08 node01 bash[7017]: debug   -120> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options>
Dec 05 15:30:08 node01 bash[7017]: debug   -119> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:            Options.compr>
Dec 05 15:30:08 node01 bash[7017]: debug   -118> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options>
Dec 05 15:30:08 node01 bash[7017]: debug   -117> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:               Options.co>
Dec 05 15:30:08 node01 bash[7017]: debug   -116> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.compress>
Dec 05 15:30:08 node01 bash[7017]: debug   -115> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.compress>
Dec 05 15:30:08 node01 bash[7017]: debug   -114> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.compress>
Dec 05 15:30:08 node01 bash[7017]: debug   -113> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options>
Dec 05 15:30:08 node01 bash[7017]: debug   -112> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:      Options.level0_file>
Dec 05 15:30:08 node01 bash[7017]: debug   -111> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:          Options.level0_>
Dec 05 15:30:08 node01 bash[7017]: debug   -110> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:              Options.lev>
Dec 05 15:30:08 node01 bash[7017]: debug   -109> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                   Option>
Dec 05 15:30:08 node01 bash[7017]: debug   -108> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.targ>
Dec 05 15:30:08 node01 bash[7017]: debug   -107> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.m>
Dec 05 15:30:08 node01 bash[7017]: debug   -106> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.level_compaction>
Dec 05 15:30:08 node01 bash[7017]: debug   -105> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:          Options.max_byt>
Dec 05 15:30:08 node01 bash[7017]: debug   -104> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.max_bytes_for_le>
Dec 05 15:30:08 node01 bash[7017]: debug   -103> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.max_bytes_for_le>
Dec 05 15:30:08 node01 bash[7017]: debug   -102> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.max_bytes_for_le>
Dec 05 15:30:08 node01 bash[7017]: debug   -101> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.max_bytes_for_le>
Dec 05 15:30:08 node01 bash[7017]: debug   -100> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.max_bytes_for_le>
Dec 05 15:30:08 node01 bash[7017]: debug    -99> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.max_bytes_for_le>
Dec 05 15:30:08 node01 bash[7017]: debug    -98> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.max_bytes_for_le>
Dec 05 15:30:08 node01 bash[7017]: debug    -97> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:       Options.max_sequen>
Dec 05 15:30:08 node01 bash[7017]: debug    -96> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                    Optio>
Dec 05 15:30:08 node01 bash[7017]: debug    -95> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                        O>
Dec 05 15:30:08 node01 bash[7017]: debug    -94> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:   Options.soft_pending_c>
Dec 05 15:30:08 node01 bash[7017]: debug    -93> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:   Options.hard_pending_c>
Dec 05 15:30:08 node01 bash[7017]: debug    -92> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:       Options.rate_limit>
Dec 05 15:30:08 node01 bash[7017]: debug    -91> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.d>
Dec 05 15:30:08 node01 bash[7017]: debug    -90> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                        O>
Dec 05 15:30:08 node01 bash[7017]: debug    -89> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug    -88> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_optio>
Dec 05 15:30:08 node01 bash[7017]: debug    -87> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_optio>
Dec 05 15:30:08 node01 bash[7017]: debug    -86> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_optio>
Dec 05 15:30:08 node01 bash[7017]: debug    -85> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_optio>
Dec 05 15:30:08 node01 bash[7017]: debug    -84> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_optio>
Dec 05 15:30:08 node01 bash[7017]: debug    -83> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_optio>
Dec 05 15:30:08 node01 bash[7017]: debug    -82> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_optio>
Dec 05 15:30:08 node01 bash[7017]: debug    -81> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_optio>
Dec 05 15:30:08 node01 bash[7017]: debug    -80> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                   Option>
Dec 05 15:30:08 node01 bash[7017]: debug    -79> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                   Option>
Dec 05 15:30:08 node01 bash[7017]: debug    -78> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.>
Dec 05 15:30:08 node01 bash[7017]: debug    -77> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:               Options.me>
Dec 05 15:30:08 node01 bash[7017]: debug    -76> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:               Options.me>
Dec 05 15:30:08 node01 bash[7017]: debug    -75> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:   Options.memtable_huge_>
Dec 05 15:30:08 node01 bash[7017]: debug    -74> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug    -73> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                    Optio>
Dec 05 15:30:08 node01 bash[7017]: debug    -72> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.o>
Dec 05 15:30:08 node01 bash[7017]: debug    -71> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.p>
Dec 05 15:30:08 node01 bash[7017]: debug    -70> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.f>
Dec 05 15:30:08 node01 bash[7017]: debug    -69> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.r>
Dec 05 15:30:08 node01 bash[7017]: debug    -68> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug    -67> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:          Options.periodi>
Dec 05 15:30:08 node01 bash[7017]: debug    -66> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                    Optio>
Dec 05 15:30:08 node01 bash[7017]: debug    -65> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                        O>
Dec 05 15:30:08 node01 bash[7017]: debug    -64> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                       Op>
Dec 05 15:30:08 node01 bash[7017]: debug    -63> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.b>
Dec 05 15:30:08 node01 bash[7017]: debug    -62> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:       Options.enable_blo>
Dec 05 15:30:08 node01 bash[7017]: debug    -61> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:   Options.blob_garbage_c>
Dec 05 15:30:08 node01 bash[7017]: debug    -60> 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:3457]>
Dec 05 15:30:08 node01 bash[7017]: debug    -59> 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:4764]>
Dec 05 15:30:08 node01 bash[7017]: debug    -58> 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:4779]>
Dec 05 15:30:08 node01 bash[7017]: debug    -57> 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:4082]>
Dec 05 15:30:08 node01 bash[7017]: debug    -56> 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:3457]>
Dec 05 15:30:08 node01 bash[7017]: debug    -55> 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: EVENT_LOG_v1 {"time_micr>
Dec 05 15:30:08 node01 bash[7017]: debug    -54> 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/db_impl/db_impl_open>
Dec 05 15:30:08 node01 bash[7017]: debug    -53> 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:4082]>
Dec 05 15:30:08 node01 bash[7017]: debug    -52> 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:3457]>
Dec 05 15:30:08 node01 bash[7017]: debug    -51> 2023-12-05T13:30:08.867+0000 7f00d16deb80  4 rocksdb: EVENT_LOG_v1 {"time_micr>
Dec 05 15:30:08 node01 bash[7017]: debug    -50> 2023-12-05T13:30:08.867+0000 7f00d16deb80  4 rocksdb: [file/delete_scheduler.c>
Dec 05 15:30:08 node01 bash[7017]: debug    -49> 2023-12-05T13:30:08.867+0000 7f00d16deb80  4 rocksdb: [db/db_impl/db_impl_open>
Dec 05 15:30:08 node01 bash[7017]: debug    -48> 2023-12-05T13:30:08.867+0000 7f00d16deb80  4 rocksdb: DB pointer 0x562461f90000
Dec 05 15:30:08 node01 bash[7017]: debug    -47> 2023-12-05T13:30:08.867+0000 7f00c1930700  4 rocksdb: [db/db_impl/db_impl.cc:9>
Dec 05 15:30:08 node01 bash[7017]: debug    -46> 2023-12-05T13:30:08.867+0000 7f00c1930700  4 rocksdb: [db/db_impl/db_impl.cc:9>
Dec 05 15:30:08 node01 bash[7017]: ** DB Stats **
Dec 05 15:30:08 node01 bash[7017]: Uptime(secs): 0.0 total, 0.0 interval
Dec 05 15:30:08 node01 bash[7017]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0>
Dec 05 15:30:08 node01 bash[7017]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Dec 05 15:30:08 node01 bash[7017]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 15:30:08 node01 bash[7017]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.0>
Dec 05 15:30:08 node01 bash[7017]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
Dec 05 15:30:08 node01 bash[7017]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 15:30:08 node01 bash[7017]: ** Compaction Stats [default] **
Dec 05 15:30:08 node01 bash[7017]: Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp>
Dec 05 15:30:08 node01 bash[7017]: -------------------------------------------------------------------------------------------->
Dec 05 15:30:08 node01 bash[7017]:   L0      2/0    7.21 MB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]:   L6      1/0    7.22 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]:  Sum      3/0   14.43 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]:  Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]: ** Compaction Stats [default] **
Dec 05 15:30:08 node01 bash[7017]: Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W->
Dec 05 15:30:08 node01 bash[7017]: -------------------------------------------------------------------------------------------->
Dec 05 15:30:08 node01 bash[7017]: Uptime(secs): 0.0 total, 0.0 interval
Dec 05 15:30:08 node01 bash[7017]: Flush(GB): cumulative 0.000, interval 0.000
Dec 05 15:30:08 node01 bash[7017]: AddFile(GB): cumulative 0.000, interval 0.000
Dec 05 15:30:08 node01 bash[7017]: AddFile(Total Files): cumulative 0, interval 0
Dec 05 15:30:08 node01 bash[7017]: AddFile(L0 Files): cumulative 0, interval 0
Dec 05 15:30:08 node01 bash[7017]: AddFile(Keys): cumulative 0, interval 0
Dec 05 15:30:08 node01 bash[7017]: Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 sec>
Dec 05 15:30:08 node01 bash[7017]: Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 secon>
Dec 05 15:30:08 node01 bash[7017]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 le>
Dec 05 15:30:08 node01 bash[7017]: ** File Read Latency Histogram By Level [default] **
Dec 05 15:30:08 node01 bash[7017]: ** Compaction Stats [default] **
Dec 05 15:30:08 node01 bash[7017]: Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp>
Dec 05 15:30:08 node01 bash[7017]: -------------------------------------------------------------------------------------------->
Dec 05 15:30:08 node01 bash[7017]:   L0      2/0    7.21 MB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]:   L6      1/0    7.22 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]:  Sum      3/0   14.43 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]:  Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]: ** Compaction Stats [default] **
Dec 05 15:30:08 node01 bash[7017]: Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W->
Dec 05 15:30:08 node01 bash[7017]: -------------------------------------------------------------------------------------------->
Dec 05 15:30:08 node01 bash[7017]: Uptime(secs): 0.0 total, 0.0 interval
Dec 05 15:30:08 node01 bash[7017]: Flush(GB): cumulative 0.000, interval 0.000
Dec 05 15:30:08 node01 bash[7017]: AddFile(GB): cumulative 0.000, interval 0.000
Dec 05 15:30:08 node01 bash[7017]: AddFile(Total Files): cumulative 0, interval 0
Dec 05 15:30:08 node01 bash[7017]: AddFile(L0 Files): cumulative 0, interval 0
Dec 05 15:30:08 node01 bash[7017]: AddFile(Keys): cumulative 0, interval 0
Dec 05 15:30:08 node01 bash[7017]: Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 sec>
Dec 05 15:30:08 node01 bash[7017]: Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 secon>
Dec 05 15:30:08 node01 bash[7017]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 le>
Dec 05 15:30:08 node01 bash[7017]: ** File Read Latency Histogram By Level [default] **
Dec 05 15:30:08 node01 bash[7017]: debug    -45> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -44> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -43> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -42> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -41> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -40> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -39> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -38> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -37> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -36> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -35> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -34> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -33> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -32> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -31> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -30> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -29> 2023-12-05T13:30:08.871+0000 7f00d16deb80  2 auth: KeyRing::load: loaded key f>
Dec 05 15:30:08 node01 bash[7017]: debug    -28> 2023-12-05T13:30:08.875+0000 7f00d16deb80  0 starting mon.node01 rank 0 at pub>
Dec 05 15:30:08 node01 bash[7017]: debug    -27> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -26> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -25> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -24> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -23> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -22> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -21> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -20> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -19> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -18> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -17> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -16> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -15> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -14> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -13> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -12> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -11> 2023-12-05T13:30:08.875+0000 7f00d16deb80  2 auth: KeyRing::load: loaded key f>
Dec 05 15:30:08 node01 bash[7017]: debug    -10> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 adding auth protocol: cephx
Dec 05 15:30:08 node01 bash[7017]: debug     -9> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 adding auth protocol: cephx
Dec 05 15:30:08 node01 bash[7017]: debug     -8> 2023-12-05T13:30:08.875+0000 7f00d16deb80 10 log_channel(cluster) update_confi>
Dec 05 15:30:08 node01 bash[7017]: debug     -7> 2023-12-05T13:30:08.875+0000 7f00d16deb80 10 log_channel(audit) update_config >
Dec 05 15:30:08 node01 bash[7017]: debug     -6> 2023-12-05T13:30:08.875+0000 7f00d16deb80  1 mon.node01@-1(???) e0 preinit fsi>
Dec 05 15:30:08 node01 bash[7017]: debug     -5> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 mon.node01@-1(???).mds e0 Unable >
Dec 05 15:30:08 node01 bash[7017]: debug     -4> 2023-12-05T13:30:08.875+0000 7f00d16deb80  0 mon.node01@-1(???).osd e9898 crus>
Dec 05 15:30:08 node01 bash[7017]: debug     -3> 2023-12-05T13:30:08.875+0000 7f00d16deb80  0 mon.node01@-1(???).osd e9898 crus>
Dec 05 15:30:08 node01 bash[7017]: debug     -2> 2023-12-05T13:30:08.875+0000 7f00d16deb80  0 mon.node01@-1(???).osd e9898 crus>
Dec 05 15:30:08 node01 bash[7017]: debug     -1> 2023-12-05T13:30:08.875+0000 7f00d16deb80  0 mon.node01@-1(???).osd e9898 crus>
Dec 05 15:30:08 node01 bash[7017]: debug      0> 2023-12-05T13:30:08.879+0000 7f00d16deb80 -1 *** Caught signal (Aborted) **
Dec 05 15:30:08 node01 bash[7017]:  in thread 7f00d16deb80 thread_name:ceph-mon
Dec 05 15:30:08 node01 bash[7017]:  ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)
Dec 05 15:30:08 node01 bash[7017]:  1: /lib64/libpthread.so.0(+0x12cf0) [0x7f00ceb4bcf0]
Dec 05 15:30:08 node01 bash[7017]:  2: gsignal()
Dec 05 15:30:08 node01 bash[7017]:  3: abort()
Dec 05 15:30:08 node01 bash[7017]:  4: /lib64/libstdc++.so.6(+0x9009b) [0x7f00ce15d09b]
Dec 05 15:30:08 node01 bash[7017]:  5: /lib64/libstdc++.so.6(+0x9654c) [0x7f00ce16354c]
Dec 05 15:30:08 node01 bash[7017]:  6: /lib64/libstdc++.so.6(+0x965a7) [0x7f00ce1635a7]
Dec 05 15:30:08 node01 bash[7017]:  7: /lib64/libstdc++.so.6(+0x96808) [0x7f00ce163808]
Dec 05 15:30:08 node01 bash[7017]:  8: /lib64/libstdc++.so.6(+0x91ff9) [0x7f00ce15eff9]
Dec 05 15:30:08 node01 bash[7017]:  9: (LogMonitor::log_external_backlog()+0xe70) [0x5624609dc190]
Dec 05 15:30:08 node01 bash[7017]:  10: (LogMonitor::update_from_paxos(bool*)+0x54) [0x5624609dc334]
Dec 05 15:30:08 node01 bash[7017]:  11: (Monitor::refresh_from_paxos(bool*)+0x104) [0x56246094fe74]
Dec 05 15:30:08 node01 bash[7017]:  12: (Monitor::preinit()+0x95d) [0x56246097e3ad]
Dec 05 15:30:08 node01 bash[7017]:  13: main()
Dec 05 15:30:08 node01 bash[7017]:  14: __libc_start_main()
Dec 05 15:30:08 node01 bash[7017]:  15: _start()
Dec 05 15:30:08 node01 bash[7017]:  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Dec 05 15:30:08 node01 bash[7017]: --- logging levels ---
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 none
Dec 05 15:30:08 node01 bash[7017]:    0/ 1 lockdep
Dec 05 15:30:08 node01 bash[7017]:    0/ 1 context
Dec 05 15:30:08 node01 bash[7017]:    1/ 1 crush
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 mds
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 mds_balancer
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 mds_locker
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 mds_log
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 mds_log_expire
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 mds_migrator
Dec 05 15:30:08 node01 bash[7017]:    0/ 1 buffer
Dec 05 15:30:08 node01 bash[7017]:    0/ 1 timer
Dec 05 15:30:08 node01 bash[7017]:    0/ 1 filer
Dec 05 15:30:08 node01 bash[7017]:    0/ 1 striper
Dec 05 15:30:08 node01 bash[7017]:    0/ 1 objecter
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 rados
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 rbd
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 rbd_mirror
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 rbd_replay
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 rbd_pwl
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 journaler
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 objectcacher
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 immutable_obj_cache
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 client
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 osd
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 optracker
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 objclass
Dec 05 15:30:08 node01 bash[7017]:    1/ 3 filestore
Dec 05 15:30:08 node01 bash[7017]:    1/ 3 journal
Dec 05 15:30:08 node01 bash[7017]:    0/ 0 ms
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 mon
Dec 05 15:30:08 node01 bash[7017]:    0/10 monc
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 paxos
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 tp
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 auth
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 crypto
Dec 05 15:30:08 node01 bash[7017]:    1/ 1 finisher
Dec 05 15:30:08 node01 bash[7017]:    1/ 1 reserver
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 heartbeatmap
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 perfcounter
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 rgw
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 rgw_sync
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 rgw_datacache
Dec 05 15:30:08 node01 bash[7017]:    1/10 civetweb
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 javaclient
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 asok
Dec 05 15:30:08 node01 bash[7017]:    1/ 1 throttle
Dec 05 15:30:08 node01 bash[7017]:    0/ 0 refs
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 compressor
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 bluestore
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 bluefs
Dec 05 15:30:08 node01 bash[7017]:    1/ 3 bdev
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 kstore
Dec 05 15:30:08 node01 bash[7017]:    4/ 5 rocksdb
Dec 05 15:30:08 node01 bash[7017]:    4/ 5 leveldb
Dec 05 15:30:08 node01 bash[7017]:    4/ 5 memdb
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 fuse
Dec 05 15:30:08 node01 bash[7017]:    2/ 5 mgr
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 mgrc
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 dpdk
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 eventtrace
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 prioritycache
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 test
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 cephfs_mirror
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 cephsqlite
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 seastore
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 seastore_onode
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 seastore_odata
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 seastore_omap
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 seastore_tm
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 seastore_cleaner
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 seastore_lba
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 seastore_cache
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 seastore_journal
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 seastore_device
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 alienstore
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 mclock
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 ceph_exporter
Dec 05 15:30:08 node01 bash[7017]:   -2/-2 (syslog threshold)
Dec 05 15:30:08 node01 bash[7017]:   99/99 (stderr threshold)
Dec 05 15:30:08 node01 bash[7017]: --- pthread ID / name mapping for recent threads ---
Dec 05 15:30:08 node01 bash[7017]:   7f00c1930700 / ceph-mon
Dec 05 15:30:08 node01 bash[7017]:   7f00cab3d700 / admin_socket
Dec 05 15:30:08 node01 bash[7017]:   7f00d16deb80 / ceph-mon
Dec 05 15:30:08 node01 bash[7017]:   max_recent     10000
Dec 05 15:30:08 node01 bash[7017]:   max_new        10000
Dec 05 15:30:08 node01 bash[7017]:   log_file
Dec 05 15:30:08 node01 bash[7017]: --- end dump of recent events ---
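
(The journal lines above are ellipsized at the terminal width, hence the trailing ">" on most of them, and a second dump of the same recent events follows below. The crash itself is the abort in LogMonitor::log_external_backlog() during Monitor::preinit(), as shown in the backtrace above.) If the complete, untruncated entries would help, I can re-capture them roughly like this (the unit name is the same as before; the host-side crash directory is an assumption based on the usual cephadm layout for this fsid, so the path may need adjusting):

# Full journal lines, without the pager's ellipsizing, saved to a file:
journalctl -u ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37@mon.node01.service \
    --no-pager -l > mon.node01.journal.txt

# The crash handler also writes an untruncated dump; on a cephadm host it
# should sit under the cluster fsid (path assumed, not verified here):
ls /var/lib/ceph/be4304e4-b0d5-11ec-8c6a-2965d4229f37/crash/
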
Dec 05 15:30:08 node01 bash[7017]: --- begin dump of recent events ---
Dec 05 15:30:08 node01 bash[7017]: debug   -290> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -289> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -288> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -287> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -286> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -285> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -284> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -283> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -282> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -281> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -280> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -279> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -278> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -277> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -276> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -275> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -274> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -273> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -272> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -271> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -270> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -269> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -268> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -267> 2023-12-05T13:30:08.847+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -266> 2023-12-05T13:30:08.855+0000 7f00d16deb80  0 set uid:gid to 167:167 (ceph:ceph)
Dec 05 15:30:08 node01 bash[7017]: debug   -265> 2023-12-05T13:30:08.855+0000 7f00d16deb80  0 ceph version 17.2.7 (b12291d11004>
Dec 05 15:30:08 node01 bash[7017]: debug   -264> 2023-12-05T13:30:08.855+0000 7f00d16deb80  0 pidfile_write: ignore empty --pid>
Dec 05 15:30:08 node01 bash[7017]: debug   -263> 2023-12-05T13:30:08.855+0000 7f00d16deb80  5 asok(0x56246214a000) init /var/ru>
Dec 05 15:30:08 node01 bash[7017]: debug   -262> 2023-12-05T13:30:08.855+0000 7f00d16deb80  5 asok(0x56246214a000) bind_and_lis>
Dec 05 15:30:08 node01 bash[7017]: debug   -261> 2023-12-05T13:30:08.855+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -260> 2023-12-05T13:30:08.855+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -259> 2023-12-05T13:30:08.855+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -258> 2023-12-05T13:30:08.855+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -257> 2023-12-05T13:30:08.855+0000 7f00d16deb80  5 asok(0x56246214a000) register_com>
Dec 05 15:30:08 node01 bash[7017]: debug   -256> 2023-12-05T13:30:08.855+0000 7f00cab3d700  5 asok(0x56246214a000) entry start
Dec 05 15:30:08 node01 bash[7017]: debug   -255> 2023-12-05T13:30:08.859+0000 7f00d16deb80  0 load: jerasure load: lrc
Dec 05 15:30:08 node01 bash[7017]: debug   -254> 2023-12-05T13:30:08.859+0000 7f00d16deb80  1  set rocksdb option level_compact>
Dec 05 15:30:08 node01 bash[7017]: debug   -253> 2023-12-05T13:30:08.859+0000 7f00d16deb80  1  set rocksdb option write_buffer_>
Dec 05 15:30:08 node01 bash[7017]: debug   -252> 2023-12-05T13:30:08.859+0000 7f00d16deb80  1  set rocksdb option compression =>
Dec 05 15:30:08 node01 bash[7017]: debug   -251> 2023-12-05T13:30:08.859+0000 7f00d16deb80  1  set rocksdb option level_compact>
Dec 05 15:30:08 node01 bash[7017]: debug   -250> 2023-12-05T13:30:08.859+0000 7f00d16deb80  1  set rocksdb option write_buffer_>
Dec 05 15:30:08 node01 bash[7017]: debug   -249> 2023-12-05T13:30:08.859+0000 7f00d16deb80  1  set rocksdb option compression =>
Dec 05 15:30:08 node01 bash[7017]: debug   -248> 2023-12-05T13:30:08.859+0000 7f00d16deb80  5 rocksdb: verify_sharding column f>
Dec 05 15:30:08 node01 bash[7017]: debug   -247> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: RocksDB version: 6.15.5
Dec 05 15:30:08 node01 bash[7017]: debug   -246> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Git sha rocksdb_build_gi>
Dec 05 15:30:08 node01 bash[7017]: debug   -245> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Compile date Oct 25 2023
Dec 05 15:30:08 node01 bash[7017]: debug   -244> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: DB SUMMARY
Dec 05 15:30:08 node01 bash[7017]: debug   -243> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: DB Session ID:  I4Q3Q1EP>
Dec 05 15:30:08 node01 bash[7017]: debug   -242> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: CURRENT file:  CURRENT
Dec 05 15:30:08 node01 bash[7017]: debug   -241> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: IDENTITY file:  IDENTITY
Dec 05 15:30:08 node01 bash[7017]: debug   -240> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: MANIFEST file:  MANIFEST>
Dec 05 15:30:08 node01 bash[7017]: debug   -239> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: SST files in /var/lib/ce>
Dec 05 15:30:08 node01 bash[7017]: debug   -238> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Write Ahead Log file in >
Dec 05 15:30:08 node01 bash[7017]: debug   -237> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -236> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                       Op>
Dec 05 15:30:08 node01 bash[7017]: debug   -235> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -234> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -233> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -232> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -231> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -230> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.m>
Dec 05 15:30:08 node01 bash[7017]: debug   -229> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -228> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -227> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                       Op>
Dec 05 15:30:08 node01 bash[7017]: debug   -226> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options>
Dec 05 15:30:08 node01 bash[7017]: debug   -225> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                   Option>
Dec 05 15:30:08 node01 bash[7017]: debug   -224> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                       Op>
Dec 05 15:30:08 node01 bash[7017]: debug   -223> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                    Optio>
Dec 05 15:30:08 node01 bash[7017]: debug   -222> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -221> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                        O>
Dec 05 15:30:08 node01 bash[7017]: debug   -220> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                       Op>
Dec 05 15:30:08 node01 bash[7017]: debug   -219> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                        O>
Dec 05 15:30:08 node01 bash[7017]: debug   -218> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                        O>
Dec 05 15:30:08 node01 bash[7017]: debug   -217> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:          Options.create_>
Dec 05 15:30:08 node01 bash[7017]: debug   -216> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -215> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -214> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.t>
Dec 05 15:30:08 node01 bash[7017]: debug   -213> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -212> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                       Op>
Dec 05 15:30:08 node01 bash[7017]: debug   -211> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                        O>
Dec 05 15:30:08 node01 bash[7017]: debug   -210> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.mani>
Dec 05 15:30:08 node01 bash[7017]: debug   -209> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                     Opti>
Dec 05 15:30:08 node01 bash[7017]: debug   -208> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                   Option>
Dec 05 15:30:08 node01 bash[7017]: debug   -207> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                    Optio>
Dec 05 15:30:08 node01 bash[7017]: debug   -206> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                    Optio>
Dec 05 15:30:08 node01 bash[7017]: debug   -205> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.access_h>
Dec 05 15:30:08 node01 bash[7017]: debug   -204> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:  Options.new_table_reade>
Dec 05 15:30:08 node01 bash[7017]: debug   -203> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:           Options.random>
Dec 05 15:30:08 node01 bash[7017]: debug   -202> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                      Opt>
Dec 05 15:30:08 node01 bash[7017]: debug   -201> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -200> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:     Options.sst_file_man>
Dec 05 15:30:08 node01 bash[7017]: debug   -199> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                       Op>
Dec 05 15:30:08 node01 bash[7017]: debug   -198> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options>
Dec 05 15:30:08 node01 bash[7017]: debug   -197> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options>
Dec 05 15:30:08 node01 bash[7017]: debug   -196> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options>
Dec 05 15:30:08 node01 bash[7017]: debug   -195> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.allow_co>
Dec 05 15:30:08 node01 bash[7017]: debug   -194> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:      Options.enable_writ>
Dec 05 15:30:08 node01 bash[7017]: debug   -193> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.writ>
Dec 05 15:30:08 node01 bash[7017]: debug   -192> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:            Options.write>
Dec 05 15:30:08 node01 bash[7017]: debug   -191> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -190> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -189> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.avoi>
Dec 05 15:30:08 node01 bash[7017]: debug   -188> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.allo>
Dec 05 15:30:08 node01 bash[7017]: debug   -187> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.pres>
Dec 05 15:30:08 node01 bash[7017]: debug   -186> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.two_>
Dec 05 15:30:08 node01 bash[7017]: debug   -185> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.manu>
Dec 05 15:30:08 node01 bash[7017]: debug   -184> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.atom>
Dec 05 15:30:08 node01 bash[7017]: debug   -183> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.avoi>
Dec 05 15:30:08 node01 bash[7017]: debug   -182> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.>
Dec 05 15:30:08 node01 bash[7017]: debug   -181> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.>
Dec 05 15:30:08 node01 bash[7017]: debug   -180> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.>
Dec 05 15:30:08 node01 bash[7017]: debug   -179> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.>
Dec 05 15:30:08 node01 bash[7017]: debug   -178> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.>
Dec 05 15:30:08 node01 bash[7017]: debug   -177> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.m>
Dec 05 15:30:08 node01 bash[7017]: debug   -176> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:            Options.bgerr>
Dec 05 15:30:08 node01 bash[7017]: debug   -175> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.allo>
Dec 05 15:30:08 node01 bash[7017]: debug   -174> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.db_h>
Dec 05 15:30:08 node01 bash[7017]: debug   -173> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.max_>
Dec 05 15:30:08 node01 bash[7017]: debug   -172> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.max_>
Dec 05 15:30:08 node01 bash[7017]: debug   -171> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.max_>
Dec 05 15:30:08 node01 bash[7017]: debug   -170> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.avoi>
Dec 05 15:30:08 node01 bash[7017]: debug   -169> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:           Options.writab>
Dec 05 15:30:08 node01 bash[7017]: debug   -168> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.dela>
Dec 05 15:30:08 node01 bash[7017]: debug   -167> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.max_>
Dec 05 15:30:08 node01 bash[7017]: debug   -166> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.dele>
Dec 05 15:30:08 node01 bash[7017]: debug   -165> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                   Option>
Dec 05 15:30:08 node01 bash[7017]: debug   -164> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.>
Dec 05 15:30:08 node01 bash[7017]: debug   -163> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.>
Dec 05 15:30:08 node01 bash[7017]: debug   -162> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -161> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug   -160> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                      Opt>
Dec 05 15:30:08 node01 bash[7017]: debug   -159> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                   Option>
Dec 05 15:30:08 node01 bash[7017]: debug   -158> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:       Options.compaction>
Dec 05 15:30:08 node01 bash[7017]: debug   -157> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options>
Dec 05 15:30:08 node01 bash[7017]: debug   -156> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Compression algorithms s>
Dec 05 15:30:08 node01 bash[7017]: debug   -155> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kZSTDNotFinalCom>
Dec 05 15:30:08 node01 bash[7017]: debug   -154> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kZSTD supported:>
Dec 05 15:30:08 node01 bash[7017]: debug   -153> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kXpressCompressi>
Dec 05 15:30:08 node01 bash[7017]: debug   -152> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kLZ4HCCompressio>
Dec 05 15:30:08 node01 bash[7017]: debug   -151> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kZlibCompression>
Dec 05 15:30:08 node01 bash[7017]: debug   -150> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kSnappyCompressi>
Dec 05 15:30:08 node01 bash[7017]: debug   -149> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kLZ4Compression >
Dec 05 15:30:08 node01 bash[7017]: debug   -148> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         kBZip2Compressio>
Dec 05 15:30:08 node01 bash[7017]: debug   -147> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Fast CRC32 supported: Su>
Dec 05 15:30:08 node01 bash[7017]: debug   -146> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:4724]>
Dec 05 15:30:08 node01 bash[7017]: debug   -145> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: [db/column_family.cc:595>
Dec 05 15:30:08 node01 bash[7017]: debug   -144> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:               Options.co>
Dec 05 15:30:08 node01 bash[7017]: debug   -143> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:           Options.merge_>
Dec 05 15:30:08 node01 bash[7017]: debug   -142> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:        Options.compactio>
Dec 05 15:30:08 node01 bash[7017]: debug   -141> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:        Options.compactio>
Dec 05 15:30:08 node01 bash[7017]: debug   -140> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:  Options.sst_partitioner>
Dec 05 15:30:08 node01 bash[7017]: debug   -139> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.memtable>
Dec 05 15:30:08 node01 bash[7017]: debug   -138> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:            Options.table>
Dec 05 15:30:08 node01 bash[7017]: debug   -137> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:            table_factory>
Dec 05 15:30:08 node01 bash[7017]:   cache_index_and_filter_blocks: 1
Dec 05 15:30:08 node01 bash[7017]:   cache_index_and_filter_blocks_with_high_priority: 0
Dec 05 15:30:08 node01 bash[7017]:   pin_l0_filter_and_index_blocks_in_cache: 0
Dec 05 15:30:08 node01 bash[7017]:   pin_top_level_index_and_filter: 1
Dec 05 15:30:08 node01 bash[7017]:   index_type: 0
Dec 05 15:30:08 node01 bash[7017]:   data_block_index_type: 0
Dec 05 15:30:08 node01 bash[7017]:   index_shortening: 1
Dec 05 15:30:08 node01 bash[7017]:   data_block_hash_table_util_ratio: 0.750000
Dec 05 15:30:08 node01 bash[7017]:   hash_index_allow_collision: 1
Dec 05 15:30:08 node01 bash[7017]:   checksum: 1
Dec 05 15:30:08 node01 bash[7017]:   no_block_cache: 0
Dec 05 15:30:08 node01 bash[7017]:   block_cache: 0x562461f27090
Dec 05 15:30:08 node01 bash[7017]:   block_cache_name: BinnedLRUCache
Dec 05 15:30:08 node01 bash[7017]:   block_cache_options:
Dec 05 15:30:08 node01 bash[7017]:     capacity : 536870912
Dec 05 15:30:08 node01 bash[7017]:     num_shard_bits : 4
Dec 05 15:30:08 node01 bash[7017]:     strict_capacity_limit : 0
Dec 05 15:30:08 node01 bash[7017]:     high_pri_pool_ratio: 0.000
Dec 05 15:30:08 node01 bash[7017]:   block_cache_compressed: (nil)
Dec 05 15:30:08 node01 bash[7017]:   persistent_cache: (nil)
Dec 05 15:30:08 node01 bash[7017]:   block_size: 4096
Dec 05 15:30:08 node01 bash[7017]:   block_size_deviation: 10
Dec 05 15:30:08 node01 bash[7017]:   block_restart_interval: 16
Dec 05 15:30:08 node01 bash[7017]:   index_block_restart_interval: 1
Dec 05 15:30:08 node01 bash[7017]:   metadata_block_size: 4096
Dec 05 15:30:08 node01 bash[7017]:   partition_filters: 0
Dec 05 15:30:08 node01 bash[7017]:   use_delta_encoding: 1
Dec 05 15:30:08 node01 bash[7017]:   filter_policy: rocksdb.BuiltinBloomFilter
Dec 05 15:30:08 node01 bash[7017]:   whole_key_filtering: 1
Dec 05 15:30:08 node01 bash[7017]:   verify_compression: 0
Dec 05 15:30:08 node01 bash[7017]:   read_amp_bytes_per_bit: 0
Dec 05 15:30:08 node01 bash[7017]:   format_version: 4
Dec 05 15:30:08 node01 bash[7017]:   enable_index_compression: 1
Dec 05 15:30:08 node01 bash[7017]:   block_align: 0
Dec 05 15:30:08 node01 bash[7017]: debug   -136> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:        Options.write_buf>
Dec 05 15:30:08 node01 bash[7017]: debug   -135> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:  Options.max_write_buffe>
Dec 05 15:30:08 node01 bash[7017]: debug   -134> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:          Options.compres>
Dec 05 15:30:08 node01 bash[7017]: debug   -133> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options>
Dec 05 15:30:08 node01 bash[7017]: debug   -132> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:       Options.prefix_ext>
Dec 05 15:30:08 node01 bash[7017]: debug   -131> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:   Options.memtable_inser>
Dec 05 15:30:08 node01 bash[7017]: debug   -130> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.num_>
Dec 05 15:30:08 node01 bash[7017]: debug   -129> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:        Options.min_write>
Dec 05 15:30:08 node01 bash[7017]: debug   -128> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:     Options.max_write_bu>
Dec 05 15:30:08 node01 bash[7017]: debug   -127> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:     Options.max_write_bu>
Dec 05 15:30:08 node01 bash[7017]: debug   -126> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:            Options.botto>
Dec 05 15:30:08 node01 bash[7017]: debug   -125> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options>
Dec 05 15:30:08 node01 bash[7017]: debug   -124> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:               Options.bo>
Dec 05 15:30:08 node01 bash[7017]: debug   -123> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.bottommo>
Dec 05 15:30:08 node01 bash[7017]: debug   -122> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.bottommo>
Dec 05 15:30:08 node01 bash[7017]: debug   -121> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.bottommo>
Dec 05 15:30:08 node01 bash[7017]: debug   -120> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options>
Dec 05 15:30:08 node01 bash[7017]: debug   -119> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:            Options.compr>
Dec 05 15:30:08 node01 bash[7017]: debug   -118> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options>
Dec 05 15:30:08 node01 bash[7017]: debug   -117> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:               Options.co>
Dec 05 15:30:08 node01 bash[7017]: debug   -116> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.compress>
Dec 05 15:30:08 node01 bash[7017]: debug   -115> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.compress>
Dec 05 15:30:08 node01 bash[7017]: debug   -114> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:         Options.compress>
Dec 05 15:30:08 node01 bash[7017]: debug   -113> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                  Options>
Dec 05 15:30:08 node01 bash[7017]: debug   -112> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:      Options.level0_file>
Dec 05 15:30:08 node01 bash[7017]: debug   -111> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:          Options.level0_>
Dec 05 15:30:08 node01 bash[7017]: debug   -110> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:              Options.lev>
Dec 05 15:30:08 node01 bash[7017]: debug   -109> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                   Option>
Dec 05 15:30:08 node01 bash[7017]: debug   -108> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:             Options.targ>
Dec 05 15:30:08 node01 bash[7017]: debug   -107> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.m>
Dec 05 15:30:08 node01 bash[7017]: debug   -106> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.level_compaction>
Dec 05 15:30:08 node01 bash[7017]: debug   -105> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:          Options.max_byt>
Dec 05 15:30:08 node01 bash[7017]: debug   -104> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.max_bytes_for_le>
Dec 05 15:30:08 node01 bash[7017]: debug   -103> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.max_bytes_for_le>
Dec 05 15:30:08 node01 bash[7017]: debug   -102> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.max_bytes_for_le>
Dec 05 15:30:08 node01 bash[7017]: debug   -101> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.max_bytes_for_le>
Dec 05 15:30:08 node01 bash[7017]: debug   -100> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.max_bytes_for_le>
Dec 05 15:30:08 node01 bash[7017]: debug    -99> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.max_bytes_for_le>
Dec 05 15:30:08 node01 bash[7017]: debug    -98> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.max_bytes_for_le>
Dec 05 15:30:08 node01 bash[7017]: debug    -97> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:       Options.max_sequen>
Dec 05 15:30:08 node01 bash[7017]: debug    -96> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                    Optio>
Dec 05 15:30:08 node01 bash[7017]: debug    -95> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                        O>
Dec 05 15:30:08 node01 bash[7017]: debug    -94> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:   Options.soft_pending_c>
Dec 05 15:30:08 node01 bash[7017]: debug    -93> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:   Options.hard_pending_c>
Dec 05 15:30:08 node01 bash[7017]: debug    -92> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:       Options.rate_limit>
Dec 05 15:30:08 node01 bash[7017]: debug    -91> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.d>
Dec 05 15:30:08 node01 bash[7017]: debug    -90> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                        O>
Dec 05 15:30:08 node01 bash[7017]: debug    -89> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug    -88> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_optio>
Dec 05 15:30:08 node01 bash[7017]: debug    -87> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_optio>
Dec 05 15:30:08 node01 bash[7017]: debug    -86> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_optio>
Dec 05 15:30:08 node01 bash[7017]: debug    -85> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_optio>
Dec 05 15:30:08 node01 bash[7017]: debug    -84> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_optio>
Dec 05 15:30:08 node01 bash[7017]: debug    -83> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_optio>
Dec 05 15:30:08 node01 bash[7017]: debug    -82> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_optio>
Dec 05 15:30:08 node01 bash[7017]: debug    -81> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb: Options.compaction_optio>
Dec 05 15:30:08 node01 bash[7017]: debug    -80> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                   Option>
Dec 05 15:30:08 node01 bash[7017]: debug    -79> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                   Option>
Dec 05 15:30:08 node01 bash[7017]: debug    -78> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                 Options.>
Dec 05 15:30:08 node01 bash[7017]: debug    -77> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:               Options.me>
Dec 05 15:30:08 node01 bash[7017]: debug    -76> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:               Options.me>
Dec 05 15:30:08 node01 bash[7017]: debug    -75> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:   Options.memtable_huge_>
Dec 05 15:30:08 node01 bash[7017]: debug    -74> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug    -73> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                    Optio>
Dec 05 15:30:08 node01 bash[7017]: debug    -72> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.o>
Dec 05 15:30:08 node01 bash[7017]: debug    -71> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.p>
Dec 05 15:30:08 node01 bash[7017]: debug    -70> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.f>
Dec 05 15:30:08 node01 bash[7017]: debug    -69> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.r>
Dec 05 15:30:08 node01 bash[7017]: debug    -68> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                         >
Dec 05 15:30:08 node01 bash[7017]: debug    -67> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:          Options.periodi>
Dec 05 15:30:08 node01 bash[7017]: debug    -66> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                    Optio>
Dec 05 15:30:08 node01 bash[7017]: debug    -65> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                        O>
Dec 05 15:30:08 node01 bash[7017]: debug    -64> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                       Op>
Dec 05 15:30:08 node01 bash[7017]: debug    -63> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:                Options.b>
Dec 05 15:30:08 node01 bash[7017]: debug    -62> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:       Options.enable_blo>
Dec 05 15:30:08 node01 bash[7017]: debug    -61> 2023-12-05T13:30:08.859+0000 7f00d16deb80  4 rocksdb:   Options.blob_garbage_c>
Dec 05 15:30:08 node01 bash[7017]: debug    -60> 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:3457]>
Dec 05 15:30:08 node01 bash[7017]: debug    -59> 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:4764]>
Dec 05 15:30:08 node01 bash[7017]: debug    -58> 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:4779]>
Dec 05 15:30:08 node01 bash[7017]: debug    -57> 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:4082]>
Dec 05 15:30:08 node01 bash[7017]: debug    -56> 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:3457]>
Dec 05 15:30:08 node01 bash[7017]: debug    -55> 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: EVENT_LOG_v1 {"time_micr>
Dec 05 15:30:08 node01 bash[7017]: debug    -54> 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/db_impl/db_impl_open>
Dec 05 15:30:08 node01 bash[7017]: debug    -53> 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:4082]>
Dec 05 15:30:08 node01 bash[7017]: debug    -52> 2023-12-05T13:30:08.863+0000 7f00d16deb80  4 rocksdb: [db/version_set.cc:3457]>
Dec 05 15:30:08 node01 bash[7017]: debug    -51> 2023-12-05T13:30:08.867+0000 7f00d16deb80  4 rocksdb: EVENT_LOG_v1 {"time_micr>
Dec 05 15:30:08 node01 bash[7017]: debug    -50> 2023-12-05T13:30:08.867+0000 7f00d16deb80  4 rocksdb: [file/delete_scheduler.c>
Dec 05 15:30:08 node01 bash[7017]: debug    -49> 2023-12-05T13:30:08.867+0000 7f00d16deb80  4 rocksdb: [db/db_impl/db_impl_open>
Dec 05 15:30:08 node01 bash[7017]: debug    -48> 2023-12-05T13:30:08.867+0000 7f00d16deb80  4 rocksdb: DB pointer 0x562461f90000
Dec 05 15:30:08 node01 bash[7017]: debug    -47> 2023-12-05T13:30:08.867+0000 7f00c1930700  4 rocksdb: [db/db_impl/db_impl.cc:9>
Dec 05 15:30:08 node01 bash[7017]: debug    -46> 2023-12-05T13:30:08.867+0000 7f00c1930700  4 rocksdb: [db/db_impl/db_impl.cc:9>
Dec 05 15:30:08 node01 bash[7017]: ** DB Stats **
Dec 05 15:30:08 node01 bash[7017]: Uptime(secs): 0.0 total, 0.0 interval
Dec 05 15:30:08 node01 bash[7017]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0>
Dec 05 15:30:08 node01 bash[7017]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Dec 05 15:30:08 node01 bash[7017]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 15:30:08 node01 bash[7017]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.0>
Dec 05 15:30:08 node01 bash[7017]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
Dec 05 15:30:08 node01 bash[7017]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 15:30:08 node01 bash[7017]: ** Compaction Stats [default] **
Dec 05 15:30:08 node01 bash[7017]: Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp>
Dec 05 15:30:08 node01 bash[7017]: -------------------------------------------------------------------------------------------->
Dec 05 15:30:08 node01 bash[7017]:   L0      2/0    7.21 MB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]:   L6      1/0    7.22 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]:  Sum      3/0   14.43 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]:  Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]: ** Compaction Stats [default] **
Dec 05 15:30:08 node01 bash[7017]: Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W->
Dec 05 15:30:08 node01 bash[7017]: -------------------------------------------------------------------------------------------->
Dec 05 15:30:08 node01 bash[7017]: Uptime(secs): 0.0 total, 0.0 interval
Dec 05 15:30:08 node01 bash[7017]: Flush(GB): cumulative 0.000, interval 0.000
Dec 05 15:30:08 node01 bash[7017]: AddFile(GB): cumulative 0.000, interval 0.000
Dec 05 15:30:08 node01 bash[7017]: AddFile(Total Files): cumulative 0, interval 0
Dec 05 15:30:08 node01 bash[7017]: AddFile(L0 Files): cumulative 0, interval 0
Dec 05 15:30:08 node01 bash[7017]: AddFile(Keys): cumulative 0, interval 0
Dec 05 15:30:08 node01 bash[7017]: Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 sec>
Dec 05 15:30:08 node01 bash[7017]: Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 secon>
Dec 05 15:30:08 node01 bash[7017]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 le>
Dec 05 15:30:08 node01 bash[7017]: ** File Read Latency Histogram By Level [default] **
Dec 05 15:30:08 node01 bash[7017]: ** Compaction Stats [default] **
Dec 05 15:30:08 node01 bash[7017]: Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp>
Dec 05 15:30:08 node01 bash[7017]: -------------------------------------------------------------------------------------------->
Dec 05 15:30:08 node01 bash[7017]:   L0      2/0    7.21 MB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]:   L6      1/0    7.22 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]:  Sum      3/0   14.43 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]:  Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0 >
Dec 05 15:30:08 node01 bash[7017]: ** Compaction Stats [default] **
Dec 05 15:30:08 node01 bash[7017]: Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W->
Dec 05 15:30:08 node01 bash[7017]: -------------------------------------------------------------------------------------------->
Dec 05 15:30:08 node01 bash[7017]: Uptime(secs): 0.0 total, 0.0 interval
Dec 05 15:30:08 node01 bash[7017]: Flush(GB): cumulative 0.000, interval 0.000
Dec 05 15:30:08 node01 bash[7017]: AddFile(GB): cumulative 0.000, interval 0.000
Dec 05 15:30:08 node01 bash[7017]: AddFile(Total Files): cumulative 0, interval 0
Dec 05 15:30:08 node01 bash[7017]: AddFile(L0 Files): cumulative 0, interval 0
Dec 05 15:30:08 node01 bash[7017]: AddFile(Keys): cumulative 0, interval 0
Dec 05 15:30:08 node01 bash[7017]: Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 sec>
Dec 05 15:30:08 node01 bash[7017]: Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 secon>
Dec 05 15:30:08 node01 bash[7017]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 le>
Dec 05 15:30:08 node01 bash[7017]: ** File Read Latency Histogram By Level [default] **
Dec 05 15:30:08 node01 bash[7017]: debug    -45> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -44> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -43> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -42> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -41> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -40> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -39> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -38> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -37> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -36> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -35> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -34> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -33> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -32> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -31> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -30> 2023-12-05T13:30:08.871+0000 7f00d16deb80  5 AuthRegistry(0x562462c04140) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -29> 2023-12-05T13:30:08.871+0000 7f00d16deb80  2 auth: KeyRing::load: loaded key f>
Dec 05 15:30:08 node01 bash[7017]: debug    -28> 2023-12-05T13:30:08.875+0000 7f00d16deb80  0 starting mon.node01 rank 0 at pub>
Dec 05 15:30:08 node01 bash[7017]: debug    -27> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -26> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -25> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -24> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -23> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -22> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -21> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -20> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -19> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -18> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -17> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -16> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -15> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -14> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -13> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -12> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 AuthRegistry(0x562462c04a40) addi>
Dec 05 15:30:08 node01 bash[7017]: debug    -11> 2023-12-05T13:30:08.875+0000 7f00d16deb80  2 auth: KeyRing::load: loaded key f>
Dec 05 15:30:08 node01 bash[7017]: debug    -10> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 adding auth protocol: cephx
Dec 05 15:30:08 node01 bash[7017]: debug     -9> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 adding auth protocol: cephx
Dec 05 15:30:08 node01 bash[7017]: debug     -8> 2023-12-05T13:30:08.875+0000 7f00d16deb80 10 log_channel(cluster) update_confi>
Dec 05 15:30:08 node01 bash[7017]: debug     -7> 2023-12-05T13:30:08.875+0000 7f00d16deb80 10 log_channel(audit) update_config >
Dec 05 15:30:08 node01 bash[7017]: debug     -6> 2023-12-05T13:30:08.875+0000 7f00d16deb80  1 mon.node01@-1(???) e0 preinit fsi>
Dec 05 15:30:08 node01 bash[7017]: debug     -5> 2023-12-05T13:30:08.875+0000 7f00d16deb80  5 mon.node01@-1(???).mds e0 Unable >
Dec 05 15:30:08 node01 bash[7017]: debug     -4> 2023-12-05T13:30:08.875+0000 7f00d16deb80  0 mon.node01@-1(???).osd e9898 crus>
Dec 05 15:30:08 node01 bash[7017]: debug     -3> 2023-12-05T13:30:08.875+0000 7f00d16deb80  0 mon.node01@-1(???).osd e9898 crus>
Dec 05 15:30:08 node01 bash[7017]: debug     -2> 2023-12-05T13:30:08.875+0000 7f00d16deb80  0 mon.node01@-1(???).osd e9898 crus>
Dec 05 15:30:08 node01 bash[7017]: debug     -1> 2023-12-05T13:30:08.875+0000 7f00d16deb80  0 mon.node01@-1(???).osd e9898 crus>
Dec 05 15:30:08 node01 bash[7017]: debug      0> 2023-12-05T13:30:08.879+0000 7f00d16deb80 -1 *** Caught signal (Aborted) **
Dec 05 15:30:08 node01 bash[7017]:  in thread 7f00d16deb80 thread_name:ceph-mon
Dec 05 15:30:08 node01 bash[7017]:  ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)
Dec 05 15:30:08 node01 bash[7017]:  1: /lib64/libpthread.so.0(+0x12cf0) [0x7f00ceb4bcf0]
Dec 05 15:30:08 node01 bash[7017]:  2: gsignal()
Dec 05 15:30:08 node01 bash[7017]:  3: abort()
Dec 05 15:30:08 node01 bash[7017]:  4: /lib64/libstdc++.so.6(+0x9009b) [0x7f00ce15d09b]
Dec 05 15:30:08 node01 bash[7017]:  5: /lib64/libstdc++.so.6(+0x9654c) [0x7f00ce16354c]
Dec 05 15:30:08 node01 bash[7017]:  6: /lib64/libstdc++.so.6(+0x965a7) [0x7f00ce1635a7]
Dec 05 15:30:08 node01 bash[7017]:  7: /lib64/libstdc++.so.6(+0x96808) [0x7f00ce163808]
Dec 05 15:30:08 node01 bash[7017]:  8: /lib64/libstdc++.so.6(+0x91ff9) [0x7f00ce15eff9]
Dec 05 15:30:08 node01 bash[7017]:  9: (LogMonitor::log_external_backlog()+0xe70) [0x5624609dc190]
Dec 05 15:30:08 node01 bash[7017]:  10: (LogMonitor::update_from_paxos(bool*)+0x54) [0x5624609dc334]
Dec 05 15:30:08 node01 bash[7017]:  11: (Monitor::refresh_from_paxos(bool*)+0x104) [0x56246094fe74]
Dec 05 15:30:08 node01 bash[7017]:  12: (Monitor::preinit()+0x95d) [0x56246097e3ad]
Dec 05 15:30:08 node01 bash[7017]:  13: main()
Dec 05 15:30:08 node01 bash[7017]:  14: __libc_start_main()
Dec 05 15:30:08 node01 bash[7017]:  15: _start()
Dec 05 15:30:08 node01 bash[7017]:  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Dec 05 15:30:08 node01 bash[7017]: --- logging levels ---
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 none
Dec 05 15:30:08 node01 bash[7017]:    0/ 1 lockdep
Dec 05 15:30:08 node01 bash[7017]:    0/ 1 context
Dec 05 15:30:08 node01 bash[7017]:    1/ 1 crush
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 mds
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 mds_balancer
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 mds_locker
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 mds_log
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 mds_log_expire
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 mds_migrator
Dec 05 15:30:08 node01 bash[7017]:    0/ 1 buffer
Dec 05 15:30:08 node01 bash[7017]:    0/ 1 timer
Dec 05 15:30:08 node01 bash[7017]:    0/ 1 filer
Dec 05 15:30:08 node01 bash[7017]:    0/ 1 striper
Dec 05 15:30:08 node01 bash[7017]:    0/ 1 objecter
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 rados
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 rbd
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 rbd_mirror
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 rbd_replay
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 rbd_pwl
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 journaler
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 objectcacher
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 immutable_obj_cache
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 client
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 osd
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 optracker
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 objclass
Dec 05 15:30:08 node01 bash[7017]:    1/ 3 filestore
Dec 05 15:30:08 node01 bash[7017]:    1/ 3 journal
Dec 05 15:30:08 node01 bash[7017]:    0/ 0 ms
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 mon
Dec 05 15:30:08 node01 bash[7017]:    0/10 monc
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 paxos
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 tp
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 auth
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 crypto
Dec 05 15:30:08 node01 bash[7017]:    1/ 1 finisher
Dec 05 15:30:08 node01 bash[7017]:    1/ 1 reserver
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 heartbeatmap
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 perfcounter
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 rgw
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 rgw_sync
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 rgw_datacache
Dec 05 15:30:08 node01 bash[7017]:    1/10 civetweb
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 javaclient
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 asok
Dec 05 15:30:08 node01 bash[7017]:    1/ 1 throttle
Dec 05 15:30:08 node01 bash[7017]:    0/ 0 refs
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 compressor
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 bluestore
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 bluefs
Dec 05 15:30:08 node01 bash[7017]:    1/ 3 bdev
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 kstore
Dec 05 15:30:08 node01 bash[7017]:    4/ 5 rocksdb
Dec 05 15:30:08 node01 bash[7017]:    4/ 5 leveldb
Dec 05 15:30:08 node01 bash[7017]:    4/ 5 memdb
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 fuse
Dec 05 15:30:08 node01 bash[7017]:    2/ 5 mgr
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 mgrc
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 dpdk
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 eventtrace
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 prioritycache
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 test
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 cephfs_mirror
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 cephsqlite
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 seastore
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 seastore_onode
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 seastore_odata
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 seastore_omap
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 seastore_tm
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 seastore_cleaner
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 seastore_lba
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 seastore_cache
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 seastore_journal
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 seastore_device
Dec 05 15:30:08 node01 bash[7017]:    0/ 5 alienstore
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 mclock
Dec 05 15:30:08 node01 bash[7017]:    1/ 5 ceph_exporter
Dec 05 15:30:08 node01 bash[7017]:   -2/-2 (syslog threshold)
Dec 05 15:30:08 node01 bash[7017]:   99/99 (stderr threshold)
Dec 05 15:30:08 node01 bash[7017]: --- pthread ID / name mapping for recent threads ---
Dec 05 15:30:08 node01 bash[7017]:   7f00c1930700 / ceph-mon
Dec 05 15:30:08 node01 bash[7017]:   7f00cab3d700 / admin_socket
Dec 05 15:30:08 node01 bash[7017]:   7f00d16deb80 / ceph-mon
Dec 05 15:30:08 node01 bash[7017]:   max_recent     10000
Dec 05 15:30:08 node01 bash[7017]:   max_new        10000
Dec 05 15:30:08 node01 bash[7017]:   log_file /var/lib/ceph/crash/2023-12-05T13:30:08.884076Z_5313f23c-b228-4405-abb0-6cb997013>
Dec 05 15:30:08 node01 bash[7017]: --- end dump of recent events ---
Dec 05 15:30:08 node01 systemd[1]: ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37@mon.node01.service: Main process exited, code=exit>
Dec 05 15:30:09 node01 systemd[1]: ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37@mon.node01.service: Failed with result 'exit-code'.
Dec 05 15:30:19 node01 systemd[1]: ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37@mon.node01.service: Scheduled restart job, restart>
Dec 05 15:30:19 node01 systemd[1]: Stopped Ceph mon.node01 for be4304e4-b0d5-11ec-8c6a-2965d4229f37.
Dec 05 15:30:19 node01 systemd[1]: ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37@mon.node01.service: Start request repeated too qui>
Dec 05 15:30:19 node01 systemd[1]: ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37@mon.node01.service: Failed with result 'exit-code'.
Dec 05 15:30:19 node01 systemd[1]: Failed to start Ceph mon.node01 for be4304e4-b0d5-11ec-8c6a-2965d4229f37.

I have replaced the disk on node01 (removed the faulty one from the server and installed a new one of the same capacity), but the output of "ceph osd tree" is unchanged (see also my note after the tree):

ID  CLASS  WEIGHT   TYPE NAME        STATUS  REWEIGHT  PRI-AFF
-1         2.05046  root default
-3         0.68349      host node01
 0    hdd  0.14650          osd.0        up   1.00000  1.00000
 4    hdd  0.04880          osd.4        up   1.00000  1.00000
 8    hdd  0.04880          osd.8        up   1.00000  1.00000
10    hdd  0.04880          osd.10       up   1.00000  1.00000
14    hdd  0.39059          osd.14      DNE         0
-5         0.68349      host node02
 2    hdd  0.14650          osd.2        up   1.00000  1.00000
 5    hdd  0.04880          osd.5        up   1.00000  1.00000
 7    hdd  0.04880          osd.7        up   1.00000  1.00000
 9    hdd  0.04880          osd.9        up   1.00000  1.00000
12    hdd  0.39059          osd.12       up   1.00000  1.00000
-7         0.68349      host node03
 1    hdd  0.14650          osd.1        up   1.00000  1.00000
 3    hdd  0.04880          osd.3        up   1.00000  1.00000
 6    hdd  0.04880          osd.6        up   1.00000  1.00000
11    hdd  0.04880          osd.11       up   1.00000  1.00000
13    hdd  0.39059          osd.13       up   1.00000  1.00000
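
(For reference, once the orchestrator responds again I assume the new disk would be picked up with something along these lines; I have not run this yet, and /dev/sdX is only a placeholder for the new device:

  # refresh the device inventory and check that the new disk is visible
  ceph orch device ls node01 --refresh
  # create an OSD on the new device
  ceph orch daemon add osd node01:/dev/sdX

Please correct me if that is not the right approach while the cluster is in this state.)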

How can I remove osd.14 completely?
If I run "ceph osd rm 14" I get:

osd.14 does not exist.
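
From the OSD_ORPHAN warning ("osd.14 exists in crush map but not in osdmap") I assume the stale entry only has to be removed from the CRUSH map, plus any leftover auth key, roughly like this (not run yet, please correct me if this is wrong):

  ceph osd crush remove osd.14
  ceph auth del osd.14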


Thanks,

Manolis Daramas


-----Original Message-----
From: Eugen Block <eblock@xxxxxx>
Sent: Tuesday, December 5, 2023 12:02 PM
To: Manolis Daramas <mdaramas@xxxxxxxxxxxx>
Cc: ceph-users@xxxxxxx
Subject: Re:  Re: After hardware failure tried to recover ceph and followed instructions for recovery using OSDS

The backfill_toofull OSDs could be the reason why the MDS won't become
active, though I'm not sure; it could also be the unfound object.
I would try to get the third MON online, probably with an empty MON
store. Or do you have any specific error messages showing why it won't
start? Add the relevant output from:

journalctl -u ceph-{FSID}@mon.node01
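
Roughly, and only as a sketch to adapt: with cephadm that would mean
removing the broken daemon and re-adding it, for example

  ceph orch daemon rm mon.node01 --force
  ceph orch daemon add mon node01:10.40.99.11

assuming the orchestrator responds and 10.40.99.11 is still the public
IP of that MON; back up the old store.db before touching it.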

Is the osd.14 healthy? I mean the disk itself; I'm not sure if you can get
it back into the cluster right now. But since it's the largest OSD on
that host, it explains why the others are backfill_toofull. Any chance
you can add another disk to node01?

Zitat von Manolis Daramas <mdaramas@xxxxxxxxxxxx>:

> Hi Eugen,
>
> $ sudo ceph osd tree (output below):
>
> ID  CLASS  WEIGHT   TYPE NAME        STATUS  REWEIGHT  PRI-AFF
> -1         2.05046  root default
> -3         0.68349      host node01
>  0    hdd  0.14650          osd.0        up   1.00000  1.00000
>  4    hdd  0.04880          osd.4        up   1.00000  1.00000
>  8    hdd  0.04880          osd.8        up   1.00000  1.00000
> 10    hdd  0.04880          osd.10       up   1.00000  1.00000
> 14    hdd  0.39059          osd.14      DNE         0
> -5         0.68349      host node02
>  2    hdd  0.14650          osd.2        up   1.00000  1.00000
>  5    hdd  0.04880          osd.5        up   1.00000  1.00000
>  7    hdd  0.04880          osd.7        up   1.00000  1.00000
>  9    hdd  0.04880          osd.9        up   1.00000  1.00000
> 12    hdd  0.39059          osd.12       up   1.00000  1.00000
> -7         0.68349      host node03
>  1    hdd  0.14650          osd.1        up   1.00000  1.00000
>  3    hdd  0.04880          osd.3        up   1.00000  1.00000
>  6    hdd  0.04880          osd.6        up   1.00000  1.00000
> 11    hdd  0.04880          osd.11       up   1.00000  1.00000
> 13    hdd  0.39059          osd.13       up   1.00000  1.00000
>
> Also, the output on manager node below:
>
> 2023-12-05T10:03:38.559+0200 7fb3fde06700 -1 auth: unable to find a
> keyring on
> /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or
> directory
>
> 2023-12-05T10:03:38.559+0200 7fb3fde06700 -1
> AuthRegistry(0x7fb3f8064310) no keyring found at
> /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling
> cephx
>
> 2023-12-05T10:03:38.559+0200 7fb3fde06700 -1 auth: unable to find a
> keyring on
> /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or
> directory
>
> 2023-12-05T10:03:38.559+0200 7fb3fde06700 -1
> AuthRegistry(0x7fb3fde04fe0) no keyring found at
> /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling
> cephx
>
> 2023-12-05T10:03:38.559+0200 7fb3fce04700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support
> [1]
>
> 2023-12-05T10:03:38.559+0200 7fb3fd605700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support
> [1]
>
> 2023-12-05T10:03:41.560+0200 7fb3f7fff700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support
> [1]
>
> 2023-12-05T10:03:41.560+0200 7fb3fce04700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support
> [1]
>
> 2023-12-05T10:03:44.560+0200 7fb3f7fff700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support
> [1]
>
> 2023-12-05T10:03:44.560+0200 7fb3fd605700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support
> [1]
>
> 2023-12-05T10:03:47.560+0200 7fb3fd605700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support
> [1]
>
> 2023-12-05T10:03:47.560+0200 7fb3f7fff700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support
> [1]
>
> 2023-12-05T10:03:50.564+0200 7fb3fd605700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support
> [1]
>
> 2023-12-05T10:03:50.564+0200 7fb3f7fff700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support
> [1]
>
> 2023-12-05T10:03:53.560+0200 7fb3fce04700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support
> [1]
>
> 2023-12-05T10:03:53.564+0200 7fb3f7fff700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support
> [1]
>
> 2023-12-05T10:03:56.564+0200 7fb3fce04700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support
> [1]
>
> 2023-12-05T10:03:56.564+0200 7fb3fd605700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support
> [1]
>
> 2023-12-05T10:03:59.564+0200 7fb3fd605700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support
> [1]
>
> 2023-12-05T10:03:59.564+0200 7fb3f7fff700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support
> [1]
>
> 2023-12-05T10:04:02.564+0200 7fb3fce04700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support
> [1]
>
> 2023-12-05T10:04:02.564+0200 7fb3fd605700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support
> [1]
>
> 2023-12-05T10:04:05.564+0200 7fb3fce04700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support
> [1]
>
> 2023-12-05T10:04:05.564+0200 7fb3f7fff700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support
> [1]
>
> 2023-12-05T10:04:08.564+0200 7fb3fce04700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support
> [1]
>
> It gets stuck after running the "ceph mgr fail" command, with all the above messages.
>
>
> The mds daemon shows the following when issuing "systemctl status
> ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37@mds.storage.node01.cjrvjc.service"
> (node01)
>
>
> ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37@mds.storage.node01.cjrvjc.service
> - Ceph mds.storage.node01.cjrvjc for
> be4304e4-b0d5-11ec-8c6a-2965d4229f37
>      Loaded: loaded
> (/etc/systemd/system/ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37@.service;
> enabled; vendor preset: enabled)
>      Active: active (running) since Tue 2023-12-05 10:16:41 EET; 7s ago
>    Main PID: 632331 (bash)
>       Tasks: 10 (limit: 72186)
>      Memory: 10.5M
>      CGroup:
> /system.slice/system-ceph\x2dbe4304e4\x2db0d5\x2d11ec\x2d8c6a\x2d2965d4229f37.slice/ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37@mds.storage.node01.cjrvjc.service
>              ├─632331 /bin/bash
> /var/lib/ceph/be4304e4-b0d5-11ec-8c6a-2965d4229f37/mds.storage.node01.cjrvjc/unit.run
>              └─632356 /usr/bin/docker run --rm --ipc=host
> --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host
> --entrypoint /usr/bin/ceph-mds --init --name ceph-be4304e4-b0d5-11ec>
>
> Dec 05 10:16:41 node01 systemd[1]: Started Ceph
> mds.storage.node01.cjrvjc for be4304e4-b0d5-11ec-8c6a-2965d4229f37.
> Dec 05 10:16:42 node01 bash[632356]: debug
> 2023-12-05T08:16:42.166+0000 7fb7e5585ac0  0 set uid:gid to 167:167
> (ceph:ceph)
> Dec 05 10:16:42 node01 bash[632356]: debug
> 2023-12-05T08:16:42.166+0000 7fb7e5585ac0  0 ceph version 17.2.7
> (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable), process
> ceph-md>
> Dec 05 10:16:42 node01 bash[632356]: debug
> 2023-12-05T08:16:42.166+0000 7fb7e5585ac0  1 main not setting numa
> affinity
> Dec 05 10:16:42 node01 bash[632356]: debug
> 2023-12-05T08:16:42.166+0000 7fb7e5585ac0  0 pidfile_write: ignore
> empty --pid-file
> Dec 05 10:16:42 node01 bash[632356]: starting mds.storage.node01.cjrvjc at
> Dec 05 10:16:42 node01 bash[632356]: debug
> 2023-12-05T08:16:42.174+0000 7fb7db80c700  1
> mds.storage.node01.cjrvjc Updating MDS map to version 6 from mon.2
> Dec 05 10:16:42 node01 bash[632356]: debug
> 2023-12-05T08:16:42.422+0000 7fb7db80c700  1
> mds.storage.node01.cjrvjc Updating MDS map to version 7 from mon.2
> Dec 05 10:16:42 node01 bash[632356]: debug
> 2023-12-05T08:16:42.422+0000 7fb7db80c700  1
> mds.storage.node01.cjrvjc Monitors have assigned me to become a
> standby.
>
> The mds daemon shows the following when issuing "systemctl status
> ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37@mds.storage.node02.lyudbp.service"
> (node02)
>
>
> ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37@mds.storage.node02.lyudbp.service
> - Ceph mds.storage.node02.lyudbp for
> be4304e4-b0d5-11ec-8c6a-2965d4229f37
>      Loaded: loaded
> (/etc/systemd/system/ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37@.service;
> enabled; vendor preset: enabled)
>      Active: active (running) since Tue 2023-12-05 10:17:21 EET; 1s ago
>    Main PID: 612499 (bash)
>       Tasks: 10 (limit: 72186)
>      Memory: 10.5M
>      CGroup:
> /system.slice/system-ceph\x2dbe4304e4\x2db0d5\x2d11ec\x2d8c6a\x2d2965d4229f37.slice/ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37@mds.storage.node02.lyudbp.service
>              ├─612499 /bin/bash
> /var/lib/ceph/be4304e4-b0d5-11ec-8c6a-2965d4229f37/mds.storage.node02.lyudbp/unit.run
>              └─612517 /usr/bin/docker run --rm --ipc=host
> --stop-signal=SIGTERM --ulimit nofile=1048576 --net=host
> --entrypoint /usr/bin/ceph-mds --init --name ceph-be4304e4-b0d5-11ec>
>
> Dec 05 10:17:21 node02 systemd[1]: Started Ceph
> mds.storage.node02.lyudbp for be4304e4-b0d5-11ec-8c6a-2965d4229f37.
> Dec 05 10:17:22 node02 bash[612517]: debug
> 2023-12-05T08:17:22.181+0000 7fd6ec9f4ac0  0 set uid:gid to 167:167
> (ceph:ceph)
> Dec 05 10:17:22 node02 bash[612517]: debug
> 2023-12-05T08:17:22.181+0000 7fd6ec9f4ac0  0 ceph version 17.2.7
> (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable), process
> ceph-md>
> Dec 05 10:17:22 node02 bash[612517]: debug
> 2023-12-05T08:17:22.181+0000 7fd6ec9f4ac0  1 main not setting numa
> affinity
> Dec 05 10:17:22 node02 bash[612517]: starting mds.storage.node02.lyudbp at
> Dec 05 10:17:22 node02 bash[612517]: debug
> 2023-12-05T08:17:22.181+0000 7fd6ec9f4ac0  0 pidfile_write: ignore
> empty --pid-file
> Dec 05 10:17:22 node02 bash[612517]: debug
> 2023-12-05T08:17:22.189+0000 7fd6e2c7b700  1
> mds.storage.node02.lyudbp Updating MDS map to version 8 from mon.1
> Dec 05 10:17:22 node02 bash[612517]: debug
> 2023-12-05T08:17:22.405+0000 7fd6e2c7b700  1
> mds.storage.node02.lyudbp Updating MDS map to version 9 from mon.1
> Dec 05 10:17:22 node02 bash[612517]: debug
> 2023-12-05T08:17:22.405+0000 7fd6e2c7b700  1
> mds.storage.node02.lyudbp Monitors have assigned me to become a
> standby.
>
> I also add below the output of "ceph health detail"
>
> HEALTH_ERR 20 stray daemon(s) not managed by cephadm; 3 stray
> host(s) with 20 daemon(s) not managed by cephadm; 1/3 mons down,
> quorum node02,node03; 1/523510 objects unfound (0.000%); 3 nearfull
> osd(s); 1 osds exist in the crush map but not in the osdmap; Low
> space hindering backfill (add storage if this doesn't resolve
> itself): 20 pgs backfill_toofull; Possible data damage: 1 pg
> recovery_unfound; Degraded data redundancy: 74666/1570530 objects
> degraded (4.754%), 21 pgs degraded, 21 pgs undersized; 21 pgs not
> deep-scrubbed in time; 21 pgs not scrubbed in time; 3 pool(s) nearfull
> [WRN] CEPHADM_STRAY_DAEMON: 20 stray daemon(s) not managed by cephadm
>     stray daemon mds.storage.node01.cjrvjc on host node01 not
> managed by cephadm
>     stray daemon mgr.node01.xlciyx on host node01 not managed by cephadm
>     stray daemon osd.0 on host node01 not managed by cephadm
>     stray daemon osd.10 on host node01 not managed by cephadm
>     stray daemon osd.4 on host node01 not managed by cephadm
>     stray daemon osd.8 on host node01 not managed by cephadm
>     stray daemon mds.storage.node02.lyudbp on host node02 not
> managed by cephadm
>     stray daemon mgr.node02.gudauu on host node02 not managed by cephadm
>     stray daemon mon.node02 on host node02 not managed by cephadm
>     stray daemon osd.12 on host node02 not managed by cephadm
>     stray daemon osd.2 on host node02 not managed by cephadm
>     stray daemon osd.5 on host node02 not managed by cephadm
>     stray daemon osd.7 on host node02 not managed by cephadm
>     stray daemon osd.9 on host node02 not managed by cephadm
>     stray daemon mon.node03 on host node03 not managed by cephadm
>     stray daemon osd.1 on host node03 not managed by cephadm
>     stray daemon osd.11 on host node03 not managed by cephadm
>     stray daemon osd.13 on host node03 not managed by cephadm
>     stray daemon osd.3 on host node03 not managed by cephadm
>     stray daemon osd.6 on host node03 not managed by cephadm
> [WRN] CEPHADM_STRAY_HOST: 3 stray host(s) with 20 daemon(s) not
> managed by cephadm
>     stray host node01 has 6 stray daemons:
> ['mds.storage.node01.cjrvjc', 'mgr.node01.xlciyx', 'osd.0',
> 'osd.10', 'osd.4', 'osd.8']
>     stray host node02 has 8 stray daemons:
> ['mds.storage.node02.lyudbp', 'mgr.node02.gudauu', 'mon.node02',
> 'osd.12', 'osd.2', 'osd.5', 'osd.7', 'osd.9']
>     stray host node03 has 6 stray daemons: ['mon.node03', 'osd.1',
> 'osd.11', 'osd.13', 'osd.3', 'osd.6']
> [WRN] MON_DOWN: 1/3 mons down, quorum node02,node03
>     mon.node01 (rank 0) addr
> [v2:10.40.99.11:3300/0,v1:10.40.99.11:6789/0] is down (out of quorum)
> [WRN] OBJECT_UNFOUND: 1/523510 objects unfound (0.000%)
>     pg 2.2 has 1 unfound objects
> [WRN] OSD_NEARFULL: 3 nearfull osd(s)
>     osd.0 is near full
>     osd.8 is near full
>     osd.10 is near full
> [WRN] OSD_ORPHAN: 1 osds exist in the crush map but not in the osdmap
>     osd.14 exists in crush map but not in osdmap
> [WRN] PG_BACKFILL_FULL: Low space hindering backfill (add storage if
> this doesn't resolve itself): 20 pgs backfill_toofull
>     pg 3.2 is active+undersized+degraded+remapped+backfill_toofull,
> acting [13,12]
>     pg 3.c is active+undersized+degraded+remapped+backfill_toofull,
> acting [12,1]
>     pg 3.12 is active+undersized+degraded+remapped+backfill_toofull,
> acting [7,11]
>     pg 3.17 is active+undersized+degraded+remapped+backfill_toofull,
> acting [13,12]
>     pg 3.27 is active+undersized+degraded+remapped+backfill_toofull,
> acting [12,1]
>     pg 3.2a is active+undersized+degraded+remapped+backfill_toofull,
> acting [12,13]
>     pg 3.31 is active+undersized+degraded+remapped+backfill_toofull,
> acting [13,9]
>     pg 3.34 is active+undersized+degraded+remapped+backfill_toofull,
> acting [12,6]
>     pg 3.35 is active+undersized+degraded+remapped+backfill_toofull,
> acting [12,13]
>     pg 3.39 is active+undersized+degraded+remapped+backfill_toofull,
> acting [12,1]
>     pg 3.3b is active+undersized+degraded+remapped+backfill_toofull,
> acting [13,7]
>     pg 3.49 is active+undersized+degraded+remapped+backfill_toofull,
> acting [12,13]
>     pg 3.4a is active+undersized+degraded+remapped+backfill_toofull,
> acting [13,2]
>     pg 3.53 is active+undersized+degraded+remapped+backfill_toofull,
> acting [12,13]
>     pg 3.56 is active+undersized+degraded+remapped+backfill_toofull,
> acting [12,1]
>     pg 3.57 is active+undersized+degraded+remapped+backfill_toofull,
> acting [13,2]
>     pg 3.5d is active+undersized+degraded+remapped+backfill_toofull,
> acting [12,13]
>     pg 3.6c is active+undersized+degraded+remapped+backfill_toofull,
> acting [13,12]
>     pg 3.6d is active+undersized+degraded+remapped+backfill_toofull,
> acting [12,13]
>     pg 3.75 is active+undersized+degraded+remapped+backfill_toofull,
> acting [13,5]
> [ERR] PG_DAMAGED: Possible data damage: 1 pg recovery_unfound
>     pg 2.2 is active+recovery_unfound+undersized+degraded+remapped,
> acting [5,13], 1 unfound
> [WRN] PG_DEGRADED: Degraded data redundancy: 74666/1570530 objects
> degraded (4.754%), 21 pgs degraded, 21 pgs undersized
>     pg 2.2 is stuck undersized for 2w, current state
> active+recovery_unfound+undersized+degraded+remapped, last acting
> [5,13]
>     pg 3.2 is stuck undersized for 2w, current state
> active+undersized+degraded+remapped+backfill_toofull, last acting
> [13,12]
>     pg 3.c is stuck undersized for 2w, current state
> active+undersized+degraded+remapped+backfill_toofull, last acting
> [12,1]
>     pg 3.12 is stuck undersized for 2w, current state
> active+undersized+degraded+remapped+backfill_toofull, last acting
> [7,11]
>     pg 3.17 is stuck undersized for 2w, current state
> active+undersized+degraded+remapped+backfill_toofull, last acting
> [13,12]
>     pg 3.27 is stuck undersized for 2w, current state
> active+undersized+degraded+remapped+backfill_toofull, last acting
> [12,1]
>     pg 3.2a is stuck undersized for 2w, current state
> active+undersized+degraded+remapped+backfill_toofull, last acting
> [12,13]
>     pg 3.31 is stuck undersized for 2w, current state
> active+undersized+degraded+remapped+backfill_toofull, last acting
> [13,9]
>     pg 3.34 is stuck undersized for 2w, current state
> active+undersized+degraded+remapped+backfill_toofull, last acting
> [12,6]
>     pg 3.35 is stuck undersized for 2w, current state
> active+undersized+degraded+remapped+backfill_toofull, last acting
> [12,13]
>     pg 3.39 is stuck undersized for 2w, current state
> active+undersized+degraded+remapped+backfill_toofull, last acting
> [12,1]
>     pg 3.3b is stuck undersized for 2w, current state
> active+undersized+degraded+remapped+backfill_toofull, last acting
> [13,7]
>     pg 3.49 is stuck undersized for 2w, current state
> active+undersized+degraded+remapped+backfill_toofull, last acting
> [12,13]
>     pg 3.4a is stuck undersized for 2w, current state
> active+undersized+degraded+remapped+backfill_toofull, last acting
> [13,2]
>     pg 3.53 is stuck undersized for 2w, current state
> active+undersized+degraded+remapped+backfill_toofull, last acting
> [12,13]
>     pg 3.56 is stuck undersized for 2w, current state
> active+undersized+degraded+remapped+backfill_toofull, last acting
> [12,1]
>     pg 3.57 is stuck undersized for 2w, current state
> active+undersized+degraded+remapped+backfill_toofull, last acting
> [13,2]
>     pg 3.5d is stuck undersized for 2w, current state
> active+undersized+degraded+remapped+backfill_toofull, last acting
> [12,13]
>     pg 3.6c is stuck undersized for 2w, current state
> active+undersized+degraded+remapped+backfill_toofull, last acting
> [13,12]
>     pg 3.6d is stuck undersized for 2w, current state
> active+undersized+degraded+remapped+backfill_toofull, last acting
> [12,13]
>     pg 3.75 is stuck undersized for 2w, current state
> active+undersized+degraded+remapped+backfill_toofull, last acting
> [13,5]
> [WRN] PG_NOT_DEEP_SCRUBBED: 21 pgs not deep-scrubbed in time
>     pg 3.75 not deep-scrubbed since 2023-11-09T21:26:58.057287+0000
>     pg 3.6d not deep-scrubbed since 2023-11-14T22:51:12.464463+0000
>     pg 3.6c not deep-scrubbed since 2023-11-11T16:54:13.940623+0000
>     pg 3.5d not deep-scrubbed since 2023-11-12T14:45:24.377322+0000
>     pg 3.57 not deep-scrubbed since 2023-11-12T11:13:12.897755+0000
>     pg 3.56 not deep-scrubbed since 2023-11-13T16:37:11.865479+0000
>     pg 3.53 not deep-scrubbed since 2023-11-11T07:31:11.837450+0000
>     pg 3.4a not deep-scrubbed since 2023-11-13T23:20:30.121413+0000
>     pg 3.49 not deep-scrubbed since 2023-11-15T00:10:04.825296+0000
>     pg 3.3b not deep-scrubbed since 2023-11-13T20:32:17.338096+0000
>     pg 3.39 not deep-scrubbed since 2023-11-15T06:01:18.346350+0000
>     pg 3.35 not deep-scrubbed since 2023-11-08T17:47:01.511603+0000
>     pg 3.34 not deep-scrubbed since 2023-11-15T19:45:02.148231+0000
>     pg 3.31 not deep-scrubbed since 2023-11-15T15:34:01.510935+0000
>     pg 3.17 not deep-scrubbed since 2023-11-15T03:29:14.419442+0000
>     pg 3.12 not deep-scrubbed since 2023-11-09T09:41:32.171837+0000
>     pg 2.2 not deep-scrubbed since 2023-11-10T03:02:25.248648+0000
>     pg 3.2 not deep-scrubbed since 2023-11-14T20:25:27.750532+0000
>     pg 3.c not deep-scrubbed since 2023-11-15T18:47:44.742320+0000
>     pg 3.27 not deep-scrubbed since 2023-11-14T16:33:14.652728+0000
>     pg 3.2a not deep-scrubbed since 2023-11-15T18:01:21.875230+0000
> [WRN] PG_NOT_SCRUBBED: 21 pgs not scrubbed in time
>     pg 3.75 not scrubbed since 2023-11-14T23:02:21.867641+0000
>     pg 3.6d not scrubbed since 2023-11-14T22:51:12.464463+0000
>     pg 3.6c not scrubbed since 2023-11-15T22:35:52.110113+0000
>     pg 3.5d not scrubbed since 2023-11-15T06:14:24.294473+0000
>     pg 3.57 not scrubbed since 2023-11-15T06:58:50.453749+0000
>     pg 3.56 not scrubbed since 2023-11-14T22:27:28.762497+0000
>     pg 3.53 not scrubbed since 2023-11-15T12:50:43.604679+0000
>     pg 3.4a not scrubbed since 2023-11-15T07:17:50.225197+0000
>     pg 3.49 not scrubbed since 2023-11-15T00:10:04.825296+0000
>     pg 3.3b not scrubbed since 2023-11-14T23:39:36.602972+0000
>     pg 3.39 not scrubbed since 2023-11-15T06:01:18.346350+0000
>     pg 3.35 not scrubbed since 2023-11-15T06:29:59.408409+0000
>     pg 3.34 not scrubbed since 2023-11-15T19:45:02.148231+0000
>     pg 3.31 not scrubbed since 2023-11-15T15:34:01.510935+0000
>     pg 3.17 not scrubbed since 2023-11-15T03:29:14.419442+0000
>     pg 3.12 not scrubbed since 2023-11-15T20:05:23.103069+0000
>     pg 2.2 not scrubbed since 2023-11-15T05:46:04.363718+0000
>     pg 3.2 not scrubbed since 2023-11-14T20:25:27.750532+0000
>     pg 3.c not scrubbed since 2023-11-15T18:47:44.742320+0000
>     pg 3.27 not scrubbed since 2023-11-15T21:09:57.747494+0000
>     pg 3.2a not scrubbed since 2023-11-15T18:01:21.875230+0000
> [WRN] POOL_NEARFULL: 3 pool(s) nearfull
>     pool '.mgr' is nearfull
>     pool 'cephfs.storage.meta' is nearfull
>     pool 'cephfs.storage.data' is nearfull
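> 
> For context, the backfill_toofull state above only clears once the
> target OSDs drop back below the backfill threshold. A minimal sketch of
> the checks involved (the 0.91 below is only an example value, and
> raising the ratio is a stop-gap until capacity is added or freed):
> 
> ceph osd df tree                # per-OSD utilisation
> ceph osd dump | grep -i ratio   # full / backfillfull / nearfull ratios
> ceph osd set-backfillfull-ratio 0.91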
>
> Any ideas?
>
> Thanks,
>
> Manolis Daramas
>
> -----Original Message-----
> From: Eugen Block <eblock@xxxxxx>
> Sent: Tuesday, November 21, 2023 1:10 PM
> To: ceph-users@xxxxxxx
> Subject:  Re: After hardware failure tried to recover
> ceph and followed instructions for recovery using OSDS
>
> Hi,
>
> I guess you could just redeploy the third MON that fails to start
> (once the orchestrator is responding again), unless you have figured it
> out already. What is it logging?
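> 
> Once the orchestrator responds, the redeploy itself is a one-liner, and
> the MON's recent log can be pulled via cephadm on that host (the daemon
> name below is just an example, use whatever 'cephadm ls' reports):
> 
> ceph orch daemon redeploy mon.node01
> cephadm ls | grep -i mon                  # confirm the daemon name
> cephadm logs --name mon.node01 -- -n 200  # args after -- go to journalctl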
>
>> 1 osds exist in the crush map but not in the osdmap
>
> This could be due to the input/output error, but it's just a guess:
>
>> osd.10  : 9225 osdmaps trimmed, 0 osdmaps added.
>> Mount failed with '(5) Input/output error'
>
> Can you add the 'ceph osd tree' output?
>
>> # ceph fs ls (output below):
>> No filesystems enabled
>
> Ceph doesn't report any active MDS daemons, yet there are two MDS
> processes listed, one on node01 and the other on node02. What are those
> daemons logging?
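> 
> If the file system map was lost when the mon store was rebuilt from the
> OSDs, the documented way back is to recreate the file system on top of
> the existing pools without touching the data (see the "recover file
> system after monitor store loss" page in the Ceph docs). A sketch only,
> assuming the fs name 'storage' and the pool names from your health
> output:
> 
> ceph fs new storage cephfs.storage.meta cephfs.storage.data --force --recover
> # --recover creates the fs as not joinable; allow the MDS in afterwards
> ceph fs set storage joinable true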
>
>> It looks like we have a problem with the orchestrator now (we've lost
>> the cephadm orchestrator) and we also cannot see the filesystem.
>
> Depending on the cluster status, the orchestrator might not behave as
> expected, and HEALTH_ERR isn't too good, of course. But you could try
> 'ceph mgr fail' and see if it reacts again.
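> 
> A minimal sketch of that check (nothing cluster-specific in it):
> 
> ceph mgr fail         # fail over to the standby mgr
> ceph -s | grep mgr    # confirm a new active mgr came up
> ceph orch status      # see whether the orchestrator answers again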
>
> Quoting Manolis Daramas <mdaramas@xxxxxxxxxxxx>:
>
>> Hello everyone,
>>
>> We had a recent power failure on a server which hosts a 3-node ceph
>> cluster (with Ubuntu 20.04 and Ceph version 17.2.7) and we think
>> that we may have lost some of our data, if not all of it.
>>
>> We have followed the instructions on
>> https://docs.ceph.com/en/reef/rados/troubleshooting/troubleshooting-mon/#recovery-using-osds
>> but with no luck.
>>
>> We have kept a backup of the store.db folder on all 3 nodes prior to
>> the steps below.
>>
>> We have stopped ceph.target on all 3 nodes.
>>
>> We have run the first part of the script, after altering it according
>> to our configuration:
>>
>> ms=/root/mon-store
>> mkdir $ms
>>
>> hosts="node01 node02 node03"
>> # collect the cluster map from the stopped OSDs, accumulating the
>> # partial mon store in $ms as it is passed from host to host
>> for host in $hosts; do
>>   rsync -avz $ms/. root@$host:$ms.remote
>>   rm -rf $ms
>>   ssh root@$host <<EOF
>>     for osd in /var/lib/ceph/be4304e4-b0d5-11ec-8c6a-2965d4229f37/osd*; do
>>       ceph-objectstore-tool --data-path \$osd --no-mon-config \
>>         --op update-mon-db --mon-store-path $ms.remote
>>     done
>> EOF
>>   # pull the updated store back before moving on to the next host
>>   rsync -avz root@$host:$ms.remote/. $ms
>> done
>>
>> and the results were:
>>
>> for node01
>>
>> osd.0   : 0 osdmaps trimmed, 673 osdmaps added.
>> osd.10  : 9225 osdmaps trimmed, 0 osdmaps added.
>> Mount failed with '(5) Input/output error'
>> osd.4   : 0 osdmaps trimmed, 0 osdmaps added.
>> osd.8   : 0 osdmaps trimmed, 0 osdmaps added.
>> receiving incremental file list
>> created directory /root/mon-store
>> ./
>> kv_backend
>> store.db/
>> store.db/000008.sst
>> store.db/000014.sst
>> store.db/000020.sst
>> store.db/000022.log
>> store.db/CURRENT
>> store.db/IDENTITY
>> store.db/LOCK
>> store.db/MANIFEST-000021
>> store.db/OPTIONS-000018
>> store.db/OPTIONS-000024
>>
>> sent 248 bytes  received 286,474 bytes  191,148.00 bytes/sec
>> total size is 7,869,025  speedup is 27.44
>> sending incremental file list
>> created directory /root/mon-store.remote
>> ./
>> kv_backend
>> store.db/
>> store.db/000008.sst
>> store.db/000014.sst
>> store.db/000020.sst
>> store.db/000022.log
>> store.db/CURRENT
>> store.db/IDENTITY
>> store.db/LOCK
>> store.db/MANIFEST-000021
>> store.db/OPTIONS-000018
>> store.db/OPTIONS-000024
>>
>> sent 286,478 bytes  received 285 bytes  191,175.33 bytes/sec
>> total size is 7,869,025  speedup is 27.44
>>
>> for node02
>>
>> osd.12  : 0 osdmaps trimmed, 0 osdmaps added.
>> osd.2   : 0 osdmaps trimmed, 0 osdmaps added.
>> osd.5   : 0 osdmaps trimmed, 0 osdmaps added.
>> osd.7   : 0 osdmaps trimmed, 0 osdmaps added.
>> osd.9   : 0 osdmaps trimmed, 0 osdmaps added.
>> receiving incremental file list
>> created directory /root/mon-store
>> ./
>> kv_backend
>> store.db/
>> store.db/000008.sst
>> store.db/000014.sst
>> store.db/000020.sst
>> store.db/000026.sst
>> store.db/000032.sst
>> store.db/000038.sst
>> store.db/000044.sst
>> store.db/000050.sst
>> store.db/000052.log
>> store.db/CURRENT
>> store.db/IDENTITY
>> store.db/LOCK
>> store.db/MANIFEST-000051
>> store.db/OPTIONS-000048
>> store.db/OPTIONS-000054
>>
>> sent 343 bytes  received 291,082 bytes  194,283.33 bytes/sec
>> total size is 7,875,746  speedup is 27.02
>> sending incremental file list
>> created directory /root/mon-store.remote
>> ./
>> kv_backend
>> store.db/
>> store.db/000008.sst
>> store.db/000014.sst
>> store.db/000020.sst
>> store.db/000026.sst
>> store.db/000032.sst
>> store.db/000038.sst
>> store.db/000044.sst
>> store.db/000050.sst
>> store.db/000052.log
>> store.db/CURRENT
>> store.db/IDENTITY
>> store.db/LOCK
>> store.db/MANIFEST-000051
>> store.db/OPTIONS-000048
>> store.db/OPTIONS-000054
>>
>> sent 291,078 bytes  received 380 bytes  582,916.00 bytes/sec
>> total size is 7,875,746  speedup is 27.02
>>
>> for node03
>>
>> osd.1   : 0 osdmaps trimmed, 0 osdmaps added.
>> osd.11  : 0 osdmaps trimmed, 0 osdmaps added.
>> osd.13  : 0 osdmaps trimmed, 0 osdmaps added.
>> osd.3   : 0 osdmaps trimmed, 0 osdmaps added.
>> osd.6   : 0 osdmaps trimmed, 0 osdmaps added.
>> receiving incremental file list
>> created directory /root/mon-store
>> ./
>> kv_backend
>> store.db/
>> store.db/000008.sst
>> store.db/000014.sst
>> store.db/000020.sst
>> store.db/000026.sst
>> store.db/000032.sst
>> store.db/000038.sst
>> store.db/000044.sst
>> store.db/000050.sst
>> store.db/000056.sst
>> store.db/000062.sst
>> store.db/000068.sst
>> store.db/000074.sst
>> store.db/000080.sst
>> store.db/000082.log
>> store.db/CURRENT
>> store.db/IDENTITY
>> store.db/LOCK
>> store.db/MANIFEST-000081
>> store.db/OPTIONS-000078
>> store.db/OPTIONS-000084
>>
>> sent 438 bytes  received 295,659 bytes  592,194.00 bytes/sec
>> total size is 7,882,477  speedup is 26.62
>>
>> Then we have run the following command (in order to rebuild the
>> monstore DB and fix it):
>>
>>
>> ceph-monstore-tool /root/mon-store rebuild -- \
>>   --keyring /etc/ceph/ceph.client.admin.keyring --mon-ids node01 node02 node03
>>
>>
>>
>> and the output is below:
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb: RocksDB
>> version: 6.15.5
>>
>>
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb: Git sha
>> rocksdb_build_git_sha:@0@
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb: Compile date
>> Oct 25 2023
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb: DB SUMMARY
>>
>>
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb: DB Session ID:
>>  OS2T69IQ02SU5OKHBI40
>>
>>
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb: CURRENT file:  CURRENT
>>
>>
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb: IDENTITY file:
>>  IDENTITY
>>
>>
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb: MANIFEST file:
>>  MANIFEST-000081 size: 1083 Bytes
>>
>>
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb: SST files in
>> /root/mon-store/store.db dir, Total Num: 13, files: 000008.sst
>> 000014.sst 000020.sst 000026.sst 000032.sst 000038.sst 000044.sst
>> 000050.sst 000056.sst
>>
>>
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb: Write Ahead
>> Log file in /root/mon-store/store.db: 000082.log size: 244 ;
>>
>>
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>          Options.error_if_exists: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>        Options.create_if_missing: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>          Options.paranoid_checks: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>                Options.track_and_verify_wals_in_manifest: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>                      Options.env: 0x56017c8d1c20
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>                       Options.fs: Posix File System
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>                 Options.info_log: 0x56017d4c3860
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.max_file_opening_threads: 16
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>               Options.statistics: (nil)
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>                Options.use_fsync: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>        Options.max_log_file_size: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>   Options.max_manifest_file_size: 1073741824
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>    Options.log_file_time_to_roll: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>        Options.keep_log_file_num: 1000
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>     Options.recycle_log_file_num: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>          Options.allow_fallocate: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>         Options.allow_mmap_reads: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>        Options.allow_mmap_writes: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>         Options.use_direct_reads: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>         Options.use_direct_io_for_flush_and_compaction: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.create_missing_column_families: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>               Options.db_log_dir:
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>                  Options.wal_dir: /root/mon-store/store.db
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.table_cache_numshardbits: 6
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>          Options.WAL_ttl_seconds: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>        Options.WAL_size_limit_MB: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>         Options.max_write_batch_group_size_bytes: 1048576
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.manifest_preallocation_size: 4194304
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>      Options.is_fd_close_on_exec: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>    Options.advise_random_on_open: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>     Options.db_write_buffer_size: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>     Options.write_buffer_manager: 0x56017d1f6a20
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.access_hint_on_compaction_start: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.new_table_reader_for_compaction_inputs: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.random_access_max_buffer_size: 1048576
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>       Options.use_adaptive_mutex: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>             Options.rate_limiter: (nil)
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.sst_file_manager.rate_bytes_per_sec: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>        Options.wal_recovery_mode: 2
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>   Options.enable_thread_tracking: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>   Options.enable_pipelined_write: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>   Options.unordered_write: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.allow_concurrent_memtable_write: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.enable_write_thread_adaptive_yield: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.write_thread_max_yield_usec: 100
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.write_thread_slow_yield_usec: 3
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>                Options.row_cache: None
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>               Options.wal_filter: None
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.avoid_flush_during_recovery: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.allow_ingest_behind: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.preserve_deletes: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.two_write_queues: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.manual_wal_flush: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.atomic_flush: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.avoid_unnecessary_blocking_io: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>  Options.persist_stats_to_disk: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>  Options.write_dbid_to_manifest: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>  Options.log_readahead_size: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>  Options.file_checksum_gen_factory: Unknown
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>  Options.best_efforts_recovery: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.max_bgerror_resume_count: 2147483647
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.bgerror_resume_retry_interval: 1000000
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.allow_data_in_errors: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.db_host_id: __hostname__
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.max_background_jobs: 2
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.max_background_compactions: -1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.max_subcompactions: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.avoid_flush_during_shutdown: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.writable_file_max_buffer_size: 1048576
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.delayed_write_rate : 16777216
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.max_total_wal_size: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.delete_obsolete_files_period_micros: 21600000000
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>    Options.stats_dump_period_sec: 600
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>  Options.stats_persist_period_sec: 600
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>  Options.stats_history_buffer_size: 1048576
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>           Options.max_open_files: -1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>           Options.bytes_per_sync: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>       Options.wal_bytes_per_sync: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>    Options.strict_bytes_per_sync: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.compaction_readahead_size: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>   Options.max_background_flushes: -1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb: Compression
>> algorithms supported:
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> kZSTDNotFinalCompression supported: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:   kZSTD supported: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> kXpressCompression supported: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> kLZ4HCCompression supported: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> kLZ4Compression supported: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> kBZip2Compression supported: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> kZlibCompression supported: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> kSnappyCompression supported: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb: Fast CRC32
>> supported: Supported on x86
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> [db/version_set.cc:4724] Recovering from manifest file:
>> /root/mon-store/store.db/MANIFEST-000081
>>
>>
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> [db/column_family.cc:595] --------------- Options for column family
>> [default]:
>>
>>
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.comparator: leveldb.BytewiseComparator
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.merge_operator:
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.compaction_filter: None
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.compaction_filter_factory: None
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.sst_partitioner_factory: None
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.memtable_factory: SkipListFactory
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.table_factory: BlockBasedTable
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> table_factory options:   flush_block_policy_factory:
>> FlushBlockBySizePolicyFactory (0x56017d234f80)
>>
>>   cache_index_and_filter_blocks: 1
>>
>>   cache_index_and_filter_blocks_with_high_priority: 0
>>
>>   pin_l0_filter_and_index_blocks_in_cache: 0
>>
>>   pin_top_level_index_and_filter: 1
>>
>>   index_type: 0
>>
>>   data_block_index_type: 0
>>
>>   index_shortening: 1
>>
>>   data_block_hash_table_util_ratio: 0.750000
>>
>>   hash_index_allow_collision: 1
>>
>>   checksum: 1
>>
>>   no_block_cache: 0
>>
>>   block_cache: 0x56017d22f610
>>
>>   block_cache_name: BinnedLRUCache
>>
>>   block_cache_options:
>>
>>     capacity : 536870912
>>
>>     num_shard_bits : 4
>>
>>     strict_capacity_limit : 0
>>
>>     high_pri_pool_ratio: 0.000
>>
>>   block_cache_compressed: (nil)
>>
>>   persistent_cache: (nil)
>>
>>   block_size: 4096
>>
>>   block_size_deviation: 10
>>
>>   block_restart_interval: 16
>>
>>   index_block_restart_interval: 1
>>
>>   metadata_block_size: 4096
>>
>>   partition_filters: 0
>>
>>   use_delta_encoding: 1
>>
>>   filter_policy: rocksdb.BuiltinBloomFilter
>>
>>   whole_key_filtering: 1
>>
>>   verify_compression: 0
>>
>>   read_amp_bytes_per_bit: 0
>>
>>   format_version: 4
>>
>>   enable_index_compression: 1
>>
>>   block_align: 0
>>
>>
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.write_buffer_size: 33554432
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.max_write_buffer_number: 2
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.compression: NoCompression
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>   Options.bottommost_compression: Disabled
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.prefix_extractor: nullptr
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.memtable_insert_with_hint_prefix_extractor: nullptr
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.num_levels: 7
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.min_write_buffer_number_to_merge: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.max_write_buffer_number_to_maintain: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.max_write_buffer_size_to_maintain: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.bottommost_compression_opts.window_bits: -14
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>   Options.bottommost_compression_opts.level: 32767
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.bottommost_compression_opts.strategy: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.bottommost_compression_opts.max_dict_bytes: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.bottommost_compression_opts.zstd_max_train_bytes: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.bottommost_compression_opts.parallel_threads: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>   Options.bottommost_compression_opts.enabled: false
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.compression_opts.window_bits: -14
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>   Options.compression_opts.level: 32767
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.compression_opts.strategy: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.compression_opts.max_dict_bytes: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.compression_opts.zstd_max_train_bytes: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.compression_opts.parallel_threads: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>   Options.compression_opts.enabled: false
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.level0_file_num_compaction_trigger: 4
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.level0_slowdown_writes_trigger: 20
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.level0_stop_writes_trigger: 36
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>    Options.target_file_size_base: 67108864
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.target_file_size_multiplier: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.max_bytes_for_level_base: 268435456
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.level_compaction_dynamic_level_bytes: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.max_bytes_for_level_multiplier: 10.000000
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.max_bytes_for_level_multiplier_addtl[0]: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.max_bytes_for_level_multiplier_addtl[1]: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.max_bytes_for_level_multiplier_addtl[2]: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.max_bytes_for_level_multiplier_addtl[3]: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.max_bytes_for_level_multiplier_addtl[4]: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.max_bytes_for_level_multiplier_addtl[5]: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.max_bytes_for_level_multiplier_addtl[6]: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.max_sequential_skip_in_iterations: 8
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>     Options.max_compaction_bytes: 1677721600
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>         Options.arena_block_size: 4194304
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.soft_pending_compaction_bytes_limit: 68719476736
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.hard_pending_compaction_bytes_limit: 274877906944
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.rate_limit_delay_max_milliseconds: 100
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.disable_auto_compactions: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>         Options.compaction_style: kCompactionStyleLevel
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>           Options.compaction_pri: kMinOverlappingRatio
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.compaction_options_universal.size_ratio: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.compaction_options_universal.min_merge_width: 2
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.compaction_options_universal.max_merge_width: 4294967295
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.compaction_options_universal.max_size_amplification_percent:
>> 200
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.compaction_options_universal.compression_size_percent: -1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.compaction_options_universal.stop_style:
>> kCompactionStopStyleTotalSize
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.compaction_options_fifo.max_table_files_size: 1073741824
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.compaction_options_fifo.allow_compaction: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>    Options.table_properties_collectors:
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>    Options.inplace_update_support: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>  Options.inplace_update_num_locks: 10000
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.memtable_prefix_bloom_size_ratio: 0.000000
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.memtable_whole_key_filtering: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.memtable_huge_page_size: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>            Options.bloom_locality: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>     Options.max_successive_merges: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.optimize_filters_for_hits: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.paranoid_file_checks: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.force_consistency_checks: 1
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.report_bg_io_stats: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>                Options.ttl: 2592000
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.periodic_compaction_seconds: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>     Options.enable_blob_files: false
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>         Options.min_blob_size: 0
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>>        Options.blob_file_size: 268435456
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.blob_compression_type: NoCompression
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.enable_blob_garbage_collection: false
>>
>>
>>
>> 2023-11-17T12:26:24.152+0200 7f482b393600  4 rocksdb:
>> Options.blob_garbage_collection_age_cutoff: 0.250000
>>
>>
>>
>> 2023-11-17T12:26:24.156+0200 7f482b393600  4 rocksdb:
>> [db/version_set.cc:4764] Recovered from manifest
>> file:/root/mon-store/store.db/MANIFEST-000081
>> succeeded,manifest_file_number is 81, next_file_number is 83,
>> last_sequence is 21183, log_number is 77,prev_log_number is
>> 0,max_column_family is 0,min_log_number_to_keep is 0
>>
>>
>>
>>
>>
>> 2023-11-17T12:26:24.156+0200 7f482b393600  4 rocksdb:
>> [db/version_set.cc:4779] Column family [default] (ID 0), log number
>> is 77
>>
>>
>>
>>
>>
>> 2023-11-17T12:26:24.156+0200 7f482b393600  4 rocksdb:
>> [db/version_set.cc:4082] Creating manifest 85
>>
>>
>>
>>
>>
>> 2023-11-17T12:26:24.160+0200 7f482b393600  4 rocksdb: EVENT_LOG_v1
>> {"time_micros": 1700216784162798, "job": 1, "event":
>> "recovery_started", "wal_files": [82]}
>>
>>
>>
>> 2023-11-17T12:26:24.160+0200 7f482b393600  4 rocksdb:
>> [db/db_impl/db_impl_open.cc:845] Recovering log #82 mode 2
>>
>>
>>
>> 2023-11-17T12:26:24.160+0200 7f482b393600  3 rocksdb:
>> [table/block_based/filter_policy.cc:991] Using legacy Bloom filter
>> with high (20) bits/key. Dramatic filter space and/or accuracy
>> improvement is available with format_version>=5.
>>
>>
>>
>> 2023-11-17T12:26:24.160+0200 7f482b393600  4 rocksdb: EVENT_LOG_v1
>> {"time_micros": 1700216784163944, "cf_name": "default", "job": 1,
>> "event": "table_file_creation", "file_number": 86, "file_size":
>> 1266, "file_checksum": "", "file_checksum_func_name": "Unknown",
>> "table_properties": {"data_size": 238, "index_size": 40,
>> "index_partitions": 0, "top_level_index_size": 0,
>> "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1,
>> "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 24,
>> "raw_value_size": 148, "raw_average_value_size": 49,
>> "num_data_blocks": 1, "num_entries": 3, "num_deletions": 0,
>> "num_merge_operands": 0, "num_range_deletions": 0, "format_version":
>> 0, "fixed_key_len": 0, "filter_policy":
>> "rocksdb.BuiltinBloomFilter", "column_family_name": "default",
>> "column_family_id": 0, "comparator": "leveldb.BytewiseComparator",
>> "merge_operator": "", "prefix_extractor_name": "nullptr",
>> "property_collectors": "[]", "compression": "NoCompression",
>> "compression_options": "wind
>>  ow_bits=-14; level=32767; strategy=0; max_dict_bytes=0;
>> zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1700216784,
>> "oldest_key_time": 0, "file_creation_time": 0, "db_id":
>> "53025a24-2059-43e1-a0f7-a87a28e33d38", "db_session_id":
>> "OS2T69IQ02SU5OKHBI40"}}
>>
>>
>>
>> 2023-11-17T12:26:24.160+0200 7f482b393600  4 rocksdb:
>> [db/version_set.cc:4082] Creating manifest 87
>>
>>
>>
>>
>>
>> 2023-11-17T12:26:24.160+0200 7f482b393600  4 rocksdb: EVENT_LOG_v1
>> {"time_micros": 1700216784166273, "job": 1, "event":
>> "recovery_finished"}
>>
>>
>>
>> 2023-11-17T12:26:24.160+0200 7f482b393600  4 rocksdb:
>> [db/column_family.cc:983] [default] Increasing compaction threads
>> because we have 14 level-0 files
>>
>>
>>
>> 2023-11-17T12:26:24.160+0200 7f482b393600  4 rocksdb:
>> [file/delete_scheduler.cc:69] Deleted file
>> /root/mon-store/store.db/000082.log immediately, rate_bytes_per_sec
>> 0, total_trash_size 0 max_trash_db_ratio 0.250000
>>
>>
>>
>> 2023-11-17T12:26:24.164+0200 7f482b393600  4 rocksdb:
>> [db/db_impl/db_impl_open.cc:1700] SstFileManager instance
>> 0x56017d230700
>>
>>
>>
>> 2023-11-17T12:26:24.164+0200 7f482b393600  4 rocksdb: DB pointer
>> 0x56017df56000
>>
>>
>>
>> adding auth for 'client.admin':
>> auth(key=AQCsdUViHYjTGBAAf7/1KYZjb0h3x3EOywqbbQ==) with
>> caps({mds=allow *,mgr=allow *,mon=allow *,osd=allow *})
>>
>> 2023-11-17T12:26:24.164+0200 7f482a349700  4 rocksdb:
>> [db/compaction/compaction_job.cc:1881] [default] [JOB 3] Compacting
>> 14@0 files to L6, score 3.50
>>
>>
>>
>> 2023-11-17T12:26:24.164+0200 7f482a349700  4 rocksdb:
>> [db/compaction/compaction_job.cc:1887] [default] Compaction start
>> summary: Base version 3 Base level 0, inputs: [86(1266B) 80(1266B)
>> 74(1267B) 68(1267B) 62(1266B) 56(1265B) 50(1265B) 44(1265B)
>> 38(1265B) 32(1266B) 26(1265B) 20(1265B) 14(283KB) 8(7387KB)]
>>
>>
>>
>>
>>
>> 2023-11-17T12:26:24.164+0200 7f482a349700  4 rocksdb: EVENT_LOG_v1
>> {"time_micros": 1700216784169200, "job": 3, "event":
>> "compaction_started", "compaction_reason": "LevelL0FilesNum",
>> "files_L0": [86, 80, 74, 68, 62, 56, 50, 44, 38, 32, 26, 20, 14, 8],
>> "score": 3.5, "input_data_size": 7870219}
>>
>>
>>
>> 2023-11-17T12:26:24.164+0200 7f4822339700  4 rocksdb:
>> [db/db_impl/db_impl.cc:901] ------- DUMPING STATS -------
>>
>>
>>
>> 2023-11-17T12:26:24.164+0200 7f4822339700  4 rocksdb:
>> [db/db_impl/db_impl.cc:903]
>>
>> ** DB Stats **
>>
>> Uptime(secs): 0.0 total, 0.0 interval
>>
>> Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per
>> commit group, ingest: 0.00 GB, 0.00 MB/s
>>
>> Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written:
>> 0.00 GB, 0.00 MB/s
>>
>> Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
>>
>> Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per
>> commit group, ingest: 0.00 MB, 0.00 MB/s
>>
>> Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00
>> MB, 0.00 MB/s
>>
>> Interval stall: 00:00:0.000 H:M:S, 0.0 percent
>>
>>
>>
>> ** Compaction Stats [default] **
>>
>> Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB)
>> Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec)
>> CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
>>
>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>>
>>   L0     14/14   7.51 MB   0.0      0.0     0.0      0.0       0.0
>>    0.0       0.0   1.0      0.0      1.1      0.00              0.00
>>         1    0.001       0      0
>>
>>  Sum     14/14   7.51 MB   0.0      0.0     0.0      0.0       0.0
>>    0.0       0.0   1.0      0.0      1.1      0.00              0.00
>>         1    0.001       0      0
>>
>>  Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0
>>    0.0       0.0   1.0      0.0      1.1      0.00              0.00
>>         1    0.001       0      0
>>
>>
>>
>> ** Compaction Stats [default] **
>>
>> Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB)
>> Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec)
>> CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
>>
>> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>>
>> User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0
>>    0.0       0.0   0.0      0.0      1.1      0.00              0.00
>>         1    0.001       0      0
>>
>> Uptime(secs): 0.0 total, 0.0 interval
>>
>> Flush(GB): cumulative 0.000, interval 0.000
>>
>> AddFile(GB): cumulative 0.000, interval 0.000
>>
>> AddFile(Total Files): cumulative 0, interval 0
>>
>> AddFile(L0 Files): cumulative 0, interval 0
>>
>> AddFile(Keys): cumulative 0, interval 0
>>
>> Cumulative compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read,
>> 0.00 MB/s read, 0.0 seconds
>>
>> Interval compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read,
>> 0.00 MB/s read, 0.0 seconds
>>
>> Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction,
>> 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for
>> pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0
>> memtable_compaction, 0 memtable_slowdown, interval 0 total count
>>
>>
>>
>> ** File Read Latency Histogram By Level [default] **
>>
>>
>>
>> ** Compaction Stats [default] **
>>
>> Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB)
>> Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec)
>> CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
>>
>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>>
>>   L0     14/14   7.51 MB   0.0      0.0     0.0      0.0       0.0
>>    0.0       0.0   1.0      0.0      1.1      0.00              0.00
>>         1    0.001       0      0
>>
>>  Sum     14/14   7.51 MB   0.0      0.0     0.0      0.0       0.0
>>    0.0       0.0   1.0      0.0      1.1      0.00              0.00
>>         1    0.001       0      0
>>
>>  Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0
>>    0.0       0.0   0.0      0.0      0.0      0.00              0.00
>>         0    0.000       0      0
>>
>>
>>
>> ** Compaction Stats [default] **
>>
>> Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB)
>> Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec)
>> CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
>>
>> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>>
>> User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0
>>    0.0       0.0   0.0      0.0      1.1      0.00              0.00
>>         1    0.001       0      0
>>
>> Uptime(secs): 0.0 total, 0.0 interval
>>
>> Flush(GB): cumulative 0.000, interval 0.000
>>
>> AddFile(GB): cumulative 0.000, interval 0.000
>>
>> AddFile(Total Files): cumulative 0, interval 0
>>
>> AddFile(L0 Files): cumulative 0, interval 0
>>
>> AddFile(Keys): cumulative 0, interval 0
>>
>> Cumulative compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read,
>> 0.00 MB/s read, 0.0 seconds
>>
>> Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read,
>> 0.00 MB/s read, 0.0 seconds
>>
>> Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction,
>> 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for
>> pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0
>> memtable_compaction, 0 memtable_slowdown, interval 0 total count
>>
>>
>>
>> ** File Read Latency Histogram By Level [default] **
>>
>>
>>
>>
>>
>> 2023-11-17T12:26:24.208+0200 7f482a349700  4 rocksdb:
>> [db/compaction/compaction_job.cc:1516] [default] [JOB 3] Generated
>> table #91: 1366 keys, 7566988 bytes
>>
>>
>>
>> 2023-11-17T12:26:24.208+0200 7f482a349700  4 rocksdb: EVENT_LOG_v1
>> {"time_micros": 1700216784213586, "cf_name": "default", "job": 3,
>> "event": "table_file_creation", "file_number": 91, "file_size":
>> 7566988, "file_checksum": "", "file_checksum_func_name": "Unknown",
>> "table_properties": {"data_size": 7541895, "index_size": 20610,
>> "index_partitions": 0, "top_level_index_size": 0,
>> "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1,
>> "filter_size": 3525, "raw_key_size": 29308, "raw_average_key_size":
>> 21, "raw_value_size": 7503048, "raw_average_value_size": 5492,
>> "num_data_blocks": 764, "num_entries": 1366, "num_deletions": 0,
>> "num_merge_operands": 0, "num_range_deletions": 0, "format_version":
>> 0, "fixed_key_len": 0, "filter_policy":
>> "rocksdb.BuiltinBloomFilter", "column_family_name": "default",
>> "column_family_id": 0, "comparator": "leveldb.BytewiseComparator",
>> "merge_operator": "", "prefix_extractor_name": "nullptr",
>> "property_collectors": "[]", "compression": "NoCompression", "c
>>  ompression_options": "window_bits=-14; level=32767; strategy=0;
>> max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ",
>> "creation_time": 1700216681, "oldest_key_time": 0,
>> "file_creation_time": 1700216784, "db_id":
>> "53025a24-2059-43e1-a0f7-a87a28e33d38", "db_session_id":
>> "OS2T69IQ02SU5OKHBI40"}}
>>
>>
>>
>> 2023-11-17T12:26:24.208+0200 7f482a349700  4 rocksdb:
>> [db/compaction/compaction_job.cc:1594] [default] [JOB 3] Compacted
>> 14@0 files to L6 => 7566988 bytes
>>
>>
>>
>> 2023-11-17T12:26:24.208+0200 7f482a349700  4 rocksdb:
>> [db/version_set.cc:3457] More existing levels in DB than needed.
>> max_bytes_for_level_multiplier may not be guaranteed.
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb: (Original Log
>> Time 2023/11/17-12:26:24.215298)
>> [db/compaction/compaction_job.cc:812] [default] compacted to: base
>> level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0
>> 0 0 0 1] max score 0.00, MB/sec: 177.1 rd, 170.3 wr, level 6, files
>> in(14, 0) out(1) MB in(7.5, 0.0) out(7.2), read-write-amplify(2.0)
>> write-amplify(1.0) OK, records in: 19842, records dropped: 18476
>> output_compression: NoCompression
>>
>>
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb: (Original Log
>> Time 2023/11/17-12:26:24.215314) EVENT_LOG_v1 {"time_micros":
>> 1700216784215306, "job": 3, "event": "compaction_finished",
>> "compaction_time_micros": 44437, "compaction_time_cpu_micros":
>> 40923, "output_level": 6, "num_output_files": 1,
>> "total_output_size": 7566988, "num_input_records": 19842,
>> "num_output_records": 1366, "num_subcompactions": 1,
>> "output_compression": "NoCompression",
>> "num_single_delete_mismatches": 0, "num_single_delete_fallthrough":
>> 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb:
>> [file/delete_scheduler.cc:69] Deleted file
>> /root/mon-store/store.db/000086.sst immediately, rate_bytes_per_sec
>> 0, total_trash_size 0 max_trash_db_ratio 0.250000
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb: EVENT_LOG_v1
>> {"time_micros": 1700216784215520, "job": 3, "event":
>> "table_file_deletion", "file_number": 86}
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb:
>> [file/delete_scheduler.cc:69] Deleted file
>> /root/mon-store/store.db/000080.sst immediately, rate_bytes_per_sec
>> 0, total_trash_size 0 max_trash_db_ratio 0.250000
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb: EVENT_LOG_v1
>> {"time_micros": 1700216784215570, "job": 3, "event":
>> "table_file_deletion", "file_number": 80}
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb:
>> [file/delete_scheduler.cc:69] Deleted file
>> /root/mon-store/store.db/000074.sst immediately, rate_bytes_per_sec
>> 0, total_trash_size 0 max_trash_db_ratio 0.250000
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb: EVENT_LOG_v1
>> {"time_micros": 1700216784215603, "job": 3, "event":
>> "table_file_deletion", "file_number": 74}
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb:
>> [file/delete_scheduler.cc:69] Deleted file
>> /root/mon-store/store.db/000068.sst immediately, rate_bytes_per_sec
>> 0, total_trash_size 0 max_trash_db_ratio 0.250000
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb: EVENT_LOG_v1
>> {"time_micros": 1700216784215641, "job": 3, "event":
>> "table_file_deletion", "file_number": 68}
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb:
>> [file/delete_scheduler.cc:69] Deleted file
>> /root/mon-store/store.db/000062.sst immediately, rate_bytes_per_sec
>> 0, total_trash_size 0 max_trash_db_ratio 0.250000
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb: EVENT_LOG_v1
>> {"time_micros": 1700216784215672, "job": 3, "event":
>> "table_file_deletion", "file_number": 62}
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb:
>> [file/delete_scheduler.cc:69] Deleted file
>> /root/mon-store/store.db/000056.sst immediately, rate_bytes_per_sec
>> 0, total_trash_size 0 max_trash_db_ratio 0.250000
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb: EVENT_LOG_v1
>> {"time_micros": 1700216784215708, "job": 3, "event":
>> "table_file_deletion", "file_number": 56}
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb:
>> [file/delete_scheduler.cc:69] Deleted file
>> /root/mon-store/store.db/000050.sst immediately, rate_bytes_per_sec
>> 0, total_trash_size 0 max_trash_db_ratio 0.250000
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb: EVENT_LOG_v1
>> {"time_micros": 1700216784215739, "job": 3, "event":
>> "table_file_deletion", "file_number": 50}
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb:
>> [file/delete_scheduler.cc:69] Deleted file
>> /root/mon-store/store.db/000044.sst immediately, rate_bytes_per_sec
>> 0, total_trash_size 0 max_trash_db_ratio 0.250000
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb: EVENT_LOG_v1
>> {"time_micros": 1700216784215772, "job": 3, "event":
>> "table_file_deletion", "file_number": 44}
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb:
>> [file/delete_scheduler.cc:69] Deleted file
>> /root/mon-store/store.db/000038.sst immediately, rate_bytes_per_sec
>> 0, total_trash_size 0 max_trash_db_ratio 0.250000
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb: EVENT_LOG_v1
>> {"time_micros": 1700216784215804, "job": 3, "event":
>> "table_file_deletion", "file_number": 38}
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb:
>> [file/delete_scheduler.cc:69] Deleted file
>> /root/mon-store/store.db/000032.sst immediately, rate_bytes_per_sec
>> 0, total_trash_size 0 max_trash_db_ratio 0.250000
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb: EVENT_LOG_v1
>> {"time_micros": 1700216784215831, "job": 3, "event":
>> "table_file_deletion", "file_number": 32}
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb:
>> [file/delete_scheduler.cc:69] Deleted file
>> /root/mon-store/store.db/000026.sst immediately, rate_bytes_per_sec
>> 0, total_trash_size 0 max_trash_db_ratio 0.250000
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb: EVENT_LOG_v1
>> {"time_micros": 1700216784215858, "job": 3, "event":
>> "table_file_deletion", "file_number": 26}
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb:
>> [file/delete_scheduler.cc:69] Deleted file
>> /root/mon-store/store.db/000020.sst immediately, rate_bytes_per_sec
>> 0, total_trash_size 0 max_trash_db_ratio 0.250000
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb: EVENT_LOG_v1
>> {"time_micros": 1700216784215888, "job": 3, "event":
>> "table_file_deletion", "file_number": 20}
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb:
>> [file/delete_scheduler.cc:69] Deleted file
>> /root/mon-store/store.db/000014.sst immediately, rate_bytes_per_sec
>> 0, total_trash_size 0 max_trash_db_ratio 0.250000
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb: EVENT_LOG_v1
>> {"time_micros": 1700216784215952, "job": 3, "event":
>> "table_file_deletion", "file_number": 14}
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb:
>> [file/delete_scheduler.cc:69] Deleted file
>> /root/mon-store/store.db/000008.sst immediately, rate_bytes_per_sec
>> 0, total_trash_size 0 max_trash_db_ratio 0.250000
>>
>>
>>
>> 2023-11-17T12:26:24.212+0200 7f482a349700  4 rocksdb: EVENT_LOG_v1
>> {"time_micros": 1700216784216804, "job": 3, "event":
>> "table_file_deletion", "file_number": 8}
>>
>>
>>
>> update_mkfs generating seed initial monmap
>>
>> epoch 0
>>
>> fsid be4304e4-b0d5-11ec-8c6a-2965d4229f37
>>
>> last_changed 2023-11-17T12:26:24.222814+0200
>>
>> created 2023-11-17T12:26:24.222814+0200
>>
>> min_mon_release 0 (unknown)
>>
>> election_strategy: 1
>>
>> 0: [v2:10.40.99.11:3300/0,v1:10.40.99.11:6789/0] mon.node01
>>
>> 1: [v2:10.40.99.12:3300/0,v1:10.40.99.12:6789/0] mon.node02
>>
>> 2: [v2:10.40.99.13:3300/0,v1:10.40.99.13:6789/0] mon.node03
>>
>> 2023-11-17T12:26:24.220+0200 7f482b393600  4 rocksdb:
>> [db/db_impl/db_impl.cc:446] Shutdown: canceling all background work
>>
>>
>>
>> 2023-11-17T12:26:24.220+0200 7f482b393600  4 rocksdb:
>> [db/db_impl/db_impl.cc:625] Shutdown complete
>>
>>
>>
>>
>> Then we copied the /root/mon-store/store.db folder across to all 3
>> nodes and tried to start the ceph.target service again.
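>>
>> For completeness, this is roughly what that copy step looked like per
>> node (a sketch: the mon must be stopped first, the hostname in the path
>> changes per node, and 167:167 is the ceph uid/gid used inside the
>> containers):
>>
>> mondir=/var/lib/ceph/be4304e4-b0d5-11ec-8c6a-2965d4229f37/mon.node01
>> mv $mondir/store.db $mondir/store.db.corrupted
>> cp -r /root/mon-store/store.db $mondir/
>> chown -R 167:167 $mondir/store.db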
>>
>> The running containers on node01 are listed below:
>>
>> d31781fa6b4c   quay.io/ceph/ceph
>> "/usr/bin/ceph-mds -..."   55 minutes ago   Up 55 minutes
>>  ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-mds-storage-node01-cjrvjc
>> e385c32651d2   quay.io/ceph/ceph
>> "/usr/bin/ceph-osd -..."   2 days ago       Up 2 days
>>  ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-osd-10
>> 904f522c4cb5   quay.io/ceph/ceph
>> "/usr/bin/ceph-osd -..."   2 days ago       Up 2 days
>>  ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-osd-0
>> 033edf99a98e   quay.io/ceph/ceph
>> "/usr/bin/ceph-osd -..."   2 days ago       Up 2 days
>>  ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-osd-4
>> 70344a6e87a0   quay.io/ceph/ceph
>> "/usr/bin/ceph-osd -..."   2 days ago       Up 2 days
>>  ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-osd-8
>> 905b782aedcf   quay.io/prometheus/prometheus:v2.43.0
>> "/bin/prometheus --c..."   2 days ago       Up 2 days
>>  ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-prometheus-node01
>> ff191654eb3e   quay.io/prometheus/node-exporter:v1.5.0
>> "/bin/node_exporter ..."   2 days ago       Up 2 days
>>  ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-node-exporter-node01
>> 459c46f4bdb7   quay.io/ceph/ceph
>> "/usr/bin/ceph-mgr -..."   2 days ago       Up 2 days
>>  ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-mgr-node01-xlciyx
>> cacfe8abcbbf   quay.io/ceph/ceph
>> "/usr/bin/ceph-crash..."   2 days ago       Up 2 days
>>  ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-crash-node01
>> e216ef2af166   quay.io/prometheus/alertmanager:v0.25.0
>> "/bin/alertmanager -..."   2 days ago       Up 2 days
>>  ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-alertmanager-node01
>> d3238b2285d1   quay.io/ceph/ceph-grafana:9.4.7           "/bin/sh -c
>> 'grafana..."   2 days ago       Up 2 days
>> ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-grafana-node01
>>
>> The running containers on node02 are listed below:
>>
>> 2aec62685dee   quay.io/ceph/ceph
>> "/usr/bin/ceph-mds -..."   54 minutes ago   Up 54 minutes
>>  ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-mds-storage-node02-lyudbp
>> 249b04f32f8c   quay.io/ceph/ceph
>> "/usr/bin/ceph-osd -..."   2 days ago       Up 2 days
>>  ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-osd-5
>> a2c96f56b517   quay.io/ceph/ceph
>> "/usr/bin/ceph-osd -..."   2 days ago       Up 2 days
>>  ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-osd-2
>> 87496d374a29   quay.io/ceph/ceph
>> "/usr/bin/ceph-osd -..."   2 days ago       Up 2 days
>>  ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-osd-12
>> 55fe47765917   quay.io/ceph/ceph
>> "/usr/bin/ceph-osd -..."   2 days ago       Up 2 days
>>  ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-osd-9
>> 76171e25dbde   quay.io/ceph/ceph
>> "/usr/bin/ceph-osd -..."   2 days ago       Up 2 days
>>  ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-osd-7
>> 220472e8c1bf   quay.io/ceph/ceph
>> "/usr/bin/ceph-mgr -..."   2 days ago       Up 2 days
>>  ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-mgr-node02-gudauu
>> 0c783e73e543   quay.io/prometheus/node-exporter:v1.5.0
>> "/bin/node_exporter ..."   2 days ago       Up 2 days
>>  ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-node-exporter-node02
>> 4e638003fa2e   quay.io/ceph/ceph
>> "/usr/bin/ceph-crash..."   2 days ago       Up 2 days
>>  ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-crash-node02
>> 42719d5cfdbf   quay.io/ceph/ceph
>> "/usr/bin/ceph-mon -..."   2 days ago       Up 2 days
>>  ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-mon-node02
>>
>> The running containers on node03 are listed below:
>>
>> 7e5879dce643   quay.io/ceph/ceph
>> "/usr/bin/ceph-osd -..."   2 days ago      Up 2 days
>> ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-osd-11
>> d53996ff33b9   quay.io/ceph/ceph
>> "/usr/bin/ceph-osd -..."   2 days ago      Up 2 days
>> ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-osd-3
>> e1ac5a8b87d3   quay.io/ceph/ceph
>> "/usr/bin/ceph-osd -..."   2 days ago      Up 2 days
>> ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-osd-1
>> f4cda871218d   quay.io/ceph/ceph
>> "/usr/bin/ceph-osd -..."   2 days ago      Up 2 days
>> ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-osd-13
>> 969e670dc47c   quay.io/ceph/ceph
>> "/usr/bin/ceph-osd -..."   2 days ago      Up 2 days
>> ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-osd-6
>> a49e91a7bb8e   quay.io/prometheus/node-exporter:v1.5.0
>> "/bin/node_exporter ..."   2 days ago      Up 2 days
>> ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-node-exporter-node03
>> 835c3893a3f4   quay.io/ceph/ceph
>> "/usr/bin/ceph-crash..."   2 days ago      Up 2 days
>> ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-crash-node03
>> bfa6f5b989ea   quay.io/ceph/ceph
>> "/usr/bin/ceph-mon -..."   2 days ago      Up 2 days
>> ceph-be4304e4-b0d5-11ec-8c6a-2965d4229f37-mon-node03
>>
>>
>> # ceph -s (output below):
>>
>> cluster:
>>     id:     be4304e4-b0d5-11ec-8c6a-2965d4229f37
>>     health: HEALTH_ERR
>>             20 stray daemon(s) not managed by cephadm
>>             3 stray host(s) with 20 daemon(s) not managed by cephadm
>>             1/3 mons down, quorum node02,node03
>>             1/523510 objects unfound (0.000%)
>>             3 nearfull osd(s)
>>             1 osds exist in the crush map but not in the osdmap
>>             Low space hindering backfill (add storage if this
>> doesn't resolve itself): 20 pgs backfill_toofull
>>             Possible data damage: 1 pg recovery_unfound
>>             Degraded data redundancy: 74666/1570530 objects degraded
>> (4.754%), 21 pgs degraded, 21 pgs undersized
>>             3 pool(s) nearfull
>>
>>   services:
>>     mon: 3 daemons, quorum node02,node03 (age 2d), out of quorum: node01
>>     mgr: node01.xlciyx(active, since 2d), standbys: node02.gudauu
>>     osd: 14 osds: 14 up (since 2d), 14 in (since 3d); 21 remapped pgs
>>
>>   data:
>>     pools:   3 pools, 161 pgs
>>     objects: 523.51k objects, 299 GiB
>>     usage:   1014 GiB used, 836 GiB / 1.8 TiB avail
>>     pgs:     74666/1570530 objects degraded (4.754%)
>>              1/523510 objects unfound (0.000%)
>>              140 active+clean
>>              20  active+undersized+degraded+remapped+backfill_toofull
>>              1   active+recovery_unfound+undersized+degraded+remapped
>>
>> # ceph fs ls (output below):
>> No filesystems enabled
>>
>> It looks like we have a problem with the orchestrator now (we've lost
>> the cephadm orchestrator) and we also cannot see the filesystem.
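>>
>> For reference, these are the checks that show whether the cephadm
>> module is still active (a sketch; re-enabling the backend should only
>> be attempted once a mgr is healthy):
>>
>> ceph mgr module ls | grep -i cephadm
>> ceph orch status
>> # only if the module/backend really got lost with the mon store:
>> ceph mgr module enable cephadm
>> ceph orch set backend cephadm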
>>
>>
>> Could you please assist, since we are not able to mount the filesystem?
>>
>>
>> Thank you,
>>
>> Manolis Daramas
>>
>>



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



