Re: Monitor persistently out-of-quorum

On 10/28 17:26, Ki Wong wrote:
> Hello,
> 
> I am at my wit's end.
> 
> So I made a mistake in the configuration of my router, and one
> of the monitors (out of 3) dropped out of the quorum. Nothing
> I’ve done allows it to rejoin, including reinstalling the
> monitor with ceph-ansible.
> 
> The connectivity issue is fixed. I’ve tested it using “nc”, and
> the host can connect to both ports 3300 and 6789 of the other
> monitors. But the wayward monitor continues to stay out of quorum.

Just to make sure, have you tried the nc test in every direction, i.e.
mon1->mon3, mon2->mon3, mon3->mon1, and mon3->mon2?
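
Something like the sketch below (the addresses are just the ones from the
monmap in your log, and the -z/-w flags depend on which nc variant you
have), run from each of the three monitor hosts in turn, would cover every
pair in both directions on both the msgr2 (3300) and msgr1 (6789) ports:

    # Run on mgmt01, mgmt02 and mgmt03 in turn; every line should report OK.
    for host in 10.0.1.1 10.1.1.1 10.2.1.1; do
        for port in 3300 6789; do
            if nc -z -w 3 "$host" "$port"; then
                echo "OK   $host:$port"
            else
                echo "FAIL $host:$port"
            fi
        done
    done

If any of those fails in only one direction, that could be enough to keep
mgmt03 stuck in the probing state even though its own outbound checks
succeed.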

> 
> What is wrong? I see a bunch of “EBUSY” errors in the log, with
> the message:
> 
>   e1 handle_auth_request haven't formed initial quorum, EBUSY
> 
> How do I fix this? Any help would be greatly appreciated.
> 
> Many thanks,
> 
> -kc
> 
> 
> With debug_mon at 1/10, I got these log snippets:
> 
> 2020-10-28 15:40:05.961 7fb79253a700  4 mon.mgmt03@2(probing) e1 probe_timeout 0x564050353ec0
> 2020-10-28 15:40:05.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 bootstrap
> 2020-10-28 15:40:05.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 sync_reset_requester
> 2020-10-28 15:40:05.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 unregister_cluster_logger - not registered
> 2020-10-28 15:40:05.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 cancel_probe_timeout (none scheduled)
> 2020-10-28 15:40:05.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 monmap e1: 3 mons at {mgmt01=[v2:10.0.1.1:3300/0,v1:10.0.1.1:6789/0],mgmt02=[v2:10.1.1.1:3300/0,v1:10.1.1.1:6789/0],mgmt03=[v2:10.2.1.1:3300/0,v1:10.2.1.1:6789/0]}
> 2020-10-28 15:40:05.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 _reset
> 2020-10-28 15:40:05.961 7fb79253a700 10 mon.mgmt03@2(probing).auth v0 _set_mon_num_rank num 0 rank 0
> 2020-10-28 15:40:05.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 cancel_probe_timeout (none scheduled)
> 2020-10-28 15:40:05.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 timecheck_finish
> 2020-10-28 15:40:05.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 scrub_event_cancel
> 2020-10-28 15:40:05.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 scrub_reset
> 2020-10-28 15:40:05.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 cancel_probe_timeout (none scheduled)
> 2020-10-28 15:40:05.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 reset_probe_timeout 0x564050347ce0 after 2 seconds
> 2020-10-28 15:40:05.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 probing other monitors
> 2020-10-28 15:40:07.961 7fb79253a700  4 mon.mgmt03@2(probing) e1 probe_timeout 0x564050347ce0
> 2020-10-28 15:40:07.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 bootstrap
> 2020-10-28 15:40:07.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 sync_reset_requester
> 2020-10-28 15:40:07.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 unregister_cluster_logger - not registered
> 2020-10-28 15:40:07.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 cancel_probe_timeout (none scheduled)
> 2020-10-28 15:40:07.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 monmap e1: 3 mons at {mgmt01=[v2:10.0.1.1:3300/0,v1:10.0.1.1:6789/0],mgmt02=[v2:10.1.1.1:3300/0,v1:10.1.1.1:6789/0],mgmt03=[v2:10.2.1.1:3300/0,v1:10.2.1.1:6789/0]}
> 2020-10-28 15:40:07.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 _reset
> 2020-10-28 15:40:07.961 7fb79253a700 10 mon.mgmt03@2(probing).auth v0 _set_mon_num_rank num 0 rank 0
> 2020-10-28 15:40:07.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 cancel_probe_timeout (none scheduled)
> 2020-10-28 15:40:07.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 timecheck_finish
> 2020-10-28 15:40:07.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 scrub_event_cancel
> 2020-10-28 15:40:07.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 scrub_reset
> 2020-10-28 15:40:07.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 cancel_probe_timeout (none scheduled)
> 2020-10-28 15:40:07.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 reset_probe_timeout 0x564050360660 after 2 seconds
> 2020-10-28 15:40:07.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 probing other monitors
> 2020-10-28 15:40:09.107 7fb79253a700 -1 mon.mgmt03@2(probing) e1 get_health_metrics reporting 7 slow ops, oldest is log(1 entries from seq 1 at 2020-10-27 23:03:41.586915)
> 2020-10-28 15:40:09.961 7fb79253a700  4 mon.mgmt03@2(probing) e1 probe_timeout 0x564050360660
> 2020-10-28 15:40:09.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 bootstrap
> 2020-10-28 15:40:09.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 sync_reset_requester
> 2020-10-28 15:40:09.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 unregister_cluster_logger - not registered
> 2020-10-28 15:40:09.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 cancel_probe_timeout (none scheduled)
> 2020-10-28 15:40:09.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 monmap e1: 3 mons at {mgmt01=[v2:10.0.1.1:3300/0,v1:10.0.1.1:6789/0],mgmt02=[v2:10.1.1.1:3300/0,v1:10.1.1.1:6789/0],mgmt03=[v2:10.2.1.1:3300/0,v1:10.2.1.1:6789/0]}
> 2020-10-28 15:40:09.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 _reset
> 2020-10-28 15:40:09.961 7fb79253a700 10 mon.mgmt03@2(probing).auth v0 _set_mon_num_rank num 0 rank 0
> 2020-10-28 15:40:09.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 cancel_probe_timeout (none scheduled)
> 2020-10-28 15:40:09.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 timecheck_finish
> 2020-10-28 15:40:09.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 scrub_event_cancel
> 2020-10-28 15:40:09.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 scrub_reset
> 2020-10-28 15:40:09.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 cancel_probe_timeout (none scheduled)
> 2020-10-28 15:40:09.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 reset_probe_timeout 0x5640503606c0 after 2 seconds
> 2020-10-28 15:40:09.961 7fb79253a700 10 mon.mgmt03@2(probing) e1 probing other monitors
> 2020-10-28 15:40:11.962 7fb79253a700  4 mon.mgmt03@2(probing) e1 probe_timeout 0x5640503606c0
> 2020-10-28 15:40:11.962 7fb79253a700 10 mon.mgmt03@2(probing) e1 bootstrap
> 2020-10-28 15:40:11.962 7fb79253a700 10 mon.mgmt03@2(probing) e1 sync_reset_requester
> 2020-10-28 15:40:11.962 7fb79253a700 10 mon.mgmt03@2(probing) e1 unregister_cluster_logger - not registered
> 2020-10-28 15:40:11.962 7fb79253a700 10 mon.mgmt03@2(probing) e1 cancel_probe_timeout (none scheduled)
> 2020-10-28 15:40:11.962 7fb79253a700 10 mon.mgmt03@2(probing) e1 monmap e1: 3 mons at {mgmt01=[v2:10.0.1.1:3300/0,v1:10.0.1.1:6789/0],mgmt02=[v2:10.1.1.1:3300/0,v1:10.1.1.1:6789/0],mgmt03=[v2:10.2.1.1:3300/0,v1:10.2.1.1:6789/0]}
> 2020-10-28 15:40:11.962 7fb79253a700 10 mon.mgmt03@2(probing) e1 _reset
> 2020-10-28 15:40:11.962 7fb79253a700 10 mon.mgmt03@2(probing).auth v0 _set_mon_num_rank num 0 rank 0
> 2020-10-28 15:40:11.962 7fb79253a700 10 mon.mgmt03@2(probing) e1 cancel_probe_timeout (none scheduled)
> 2020-10-28 15:40:11.962 7fb79253a700 10 mon.mgmt03@2(probing) e1 timecheck_finish
> 2020-10-28 15:40:11.962 7fb79253a700 10 mon.mgmt03@2(probing) e1 scrub_event_cancel
> 2020-10-28 15:40:11.962 7fb79253a700 10 mon.mgmt03@2(probing) e1 scrub_reset
> 2020-10-28 15:40:11.962 7fb79253a700 10 mon.mgmt03@2(probing) e1 cancel_probe_timeout (none scheduled)
> 2020-10-28 15:40:11.962 7fb79253a700 10 mon.mgmt03@2(probing) e1 reset_probe_timeout 0x564050360900 after 2 seconds
> 2020-10-28 15:40:11.962 7fb79253a700 10 mon.mgmt03@2(probing) e1 probing other monitors
> 2020-10-28 15:40:12.354 7fb79453e700 10 mon.mgmt03@2(probing) e1 handle_auth_request con 0x56404cf25400 (start) method 2 payload 32
> 2020-10-28 15:40:12.354 7fb79453e700 10 mon.mgmt03@2(probing) e1 handle_auth_request haven't formed initial quorum, EBUSY
> 2020-10-28 15:40:12.354 7fb78fd35700 10 mon.mgmt03@2(probing) e1 ms_handle_reset 0x56404cf25400 -
> 
>
> 
> 2020-10-28 15:40:59.968 7fb79253a700 10 mon.mgmt03@2(probing) e1 probing other monitors
> 2020-10-28 15:41:00.110 7fb79453e700 10 mon.mgmt03@2(probing) e1 handle_auth_request con 0x56404cae1000 (start) method 2 payload 22
> 2020-10-28 15:41:00.110 7fb79453e700 10 mon.mgmt03@2(probing) e1 handle_auth_request haven't formed initial quorum, EBUSY
> 2020-10-28 15:41:00.110 7fb78fd35700 10 mon.mgmt03@2(probing) e1 ms_handle_reset 0x56404cae1000 -
> 2020-10-28 15:41:00.110 7fb793d3d700 10 mon.mgmt03@2(probing) e1 handle_auth_request con 0x56404c8d4c00 (start) method 2 payload 22
> 2020-10-28 15:41:00.110 7fb793d3d700 10 mon.mgmt03@2(probing) e1 handle_auth_request haven't formed initial quorum, EBUSY
> 2020-10-28 15:41:00.110 7fb78fd35700 10 mon.mgmt03@2(probing) e1 ms_handle_reset 0x56404c8d4c00 -
> 2020-10-28 15:41:00.117 7fb79453e700 10 mon.mgmt03@2(probing) e1 handle_auth_request con 0x56404c630800 (start) method 2 payload 22
> 2020-10-28 15:41:00.117 7fb79453e700 10 mon.mgmt03@2(probing) e1 handle_auth_request haven't formed initial quorum, EBUSY
> 2020-10-28 15:41:00.117 7fb78fd35700 10 mon.mgmt03@2(probing) e1 ms_handle_reset 0x56404c630800 -
> 
>
> 
> 2020-10-28 15:42:42.379 7fb78d530700  4 rocksdb: [db/db_impl.cc:777] ------- DUMPING STATS -------
> 2020-10-28 15:42:42.379 7fb78d530700  4 rocksdb: [db/db_impl.cc:778]
> ** DB Stats **
> Uptime(secs): 60000.0 total, 600.0 interval
> Cumulative writes: 6 writes, 7 keys, 6 commit groups, 0.9 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
> Cumulative WAL: 6 writes, 6 syncs, 0.86 writes per sync, written: 0.00 GB, 0.00 MB/s
> Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
> Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
> Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
> Interval stall: 00:00:0.000 H:M:S, 0.0 percent
> 
> ** Compaction Stats [default] **
> Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>   L0      2/0    3.02 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0
>  Sum      2/0    3.02 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0
>  Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0
> 
> ** Compaction Stats [default] **
> Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.7      0.00              0.00         1    0.001       0      0
> Uptime(secs): 60000.0 total, 600.0 interval
> Flush(GB): cumulative 0.000, interval 0.000
> AddFile(GB): cumulative 0.000, interval 0.000
> AddFile(Total Files): cumulative 0, interval 0
> AddFile(L0 Files): cumulative 0, interval 0
> AddFile(Keys): cumulative 0, interval 0
> Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
> Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
> Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
> 
>
> 
> 2020-10-28 17:17:11.781 7f5f694821c0  0 using public_addr v2:10.2.1.1:0/0 -> [v2:10.2.1.1:3300/0,v1:10.2.1.1:6789/0]
> 2020-10-28 17:17:11.781 7f5f694821c0  0 starting mon.mgmt03 rank -1 at public addrs [v2:10.2.1.1:3300/0,v1:10.2.1.1:6789/0] at bind addrs [v2:10.2.1.1:3300/0,v1:10.2.1.1:6789/0] mon_data /var/lib/ceph/mon/ceph-mgmt03 fsid 374aed9e-5fc1-47e1-8d29-4416f7425e76
> 2020-10-28 17:17:11.783 7f5f694821c0  1 mon.mgmt03@-1(???) e2 preinit fsid 374aed9e-5fc1-47e1-8d29-4416f7425e76
> 2020-10-28 17:17:11.783 7f5f694821c0  1 mon.mgmt03@-1(???) e2  initial_members mgmt01,mgmt02,mgmt03, filtering seed monmap
> 2020-10-28 17:17:11.783 7f5f694821c0  1 mon.mgmt03@-1(???) e2 preinit clean up potentially inconsistent store state
> 2020-10-28 17:17:11.785 7f5f694821c0  0 -- [v2:10.2.1.1:3300/0,v1:10.2.1.1:6789/0] send_to message mon_probe(probe 374aed9e-5fc1-47e1-8d29-4416f7425e76 name mgmt03 new mon_release 14) v7 with empty dest
> 2020-10-28 17:17:13.191 7f5f5170d700  0 mon.mgmt03@-1(probing) e3  monmap addrs for rank 2 changed, i am [v2:10.2.1.1:3300/0,v1:10.2.1.1:6789/0], monmap is v2:10.2.1.1:3300/0, respawning
> 2020-10-28 17:17:13.191 7f5f5170d700 -1 mon.mgmt03@-1(probing) e3  stashing newest monmap 3 for next startup
> 2020-10-28 17:17:13.191 7f5f5170d700  0 mon.mgmt03@-1(probing) e3 respawn
> 2020-10-28 17:17:13.191 7f5f5170d700  1 mon.mgmt03@-1(probing) e3  e: '/usr/bin/ceph-mon'
> 2020-10-28 17:17:13.191 7f5f5170d700  1 mon.mgmt03@-1(probing) e3  0: '/usr/bin/ceph-mon'
> 2020-10-28 17:17:13.191 7f5f5170d700  1 mon.mgmt03@-1(probing) e3  1: '-f'
> 2020-10-28 17:17:13.191 7f5f5170d700  1 mon.mgmt03@-1(probing) e3  2: '--cluster'
> 2020-10-28 17:17:13.191 7f5f5170d700  1 mon.mgmt03@-1(probing) e3  3: 'ceph'
> 2020-10-28 17:17:13.191 7f5f5170d700  1 mon.mgmt03@-1(probing) e3  4: '--id'
> 2020-10-28 17:17:13.191 7f5f5170d700  1 mon.mgmt03@-1(probing) e3  5: 'mgmt03'
> 2020-10-28 17:17:13.191 7f5f5170d700  1 mon.mgmt03@-1(probing) e3  6: '--setuser'
> 2020-10-28 17:17:13.191 7f5f5170d700  1 mon.mgmt03@-1(probing) e3  7: 'ceph'
> 2020-10-28 17:17:13.192 7f5f5170d700  1 mon.mgmt03@-1(probing) e3  8: '--setgroup'
> 2020-10-28 17:17:13.192 7f5f5170d700  1 mon.mgmt03@-1(probing) e3  9: 'ceph'
> 2020-10-28 17:17:13.192 7f5f5170d700  1 mon.mgmt03@-1(probing) e3 respawning with exe /usr/bin/ceph-mon
> 2020-10-28 17:17:13.192 7f5f5170d700  1 mon.mgmt03@-1(probing) e3  exe_path /proc/self/exe
> 2020-10-28 17:17:13.217 7eff1f7cd1c0  0 ceph version 14.2.11 (f7fdb2f52131f54b891a2ec99d8205561242cdaf) nautilus (stable), process ceph-mon, pid 24265
> 2020-10-28 17:17:13.217 7eff1f7cd1c0  0 pidfile_write: ignore empty --pid-file
> 2020-10-28 17:17:13.246 7eff1f7cd1c0  0 load: jerasure load: lrc load: isa
> 2020-10-28 17:17:13.246 7eff1f7cd1c0  0  set rocksdb option compression = kNoCompression
> 2020-10-28 17:17:13.246 7eff1f7cd1c0  0  set rocksdb option level_compaction_dynamic_level_bytes = true
> 2020-10-28 17:17:13.246 7eff1f7cd1c0  0  set rocksdb option write_buffer_size = 33554432
> 2020-10-28 17:17:13.246 7eff1f7cd1c0  0  set rocksdb option compression = kNoCompression
> 2020-10-28 17:17:13.246 7eff1f7cd1c0  0  set rocksdb option level_compaction_dynamic_level_bytes = true
> 2020-10-28 17:17:13.246 7eff1f7cd1c0  0  set rocksdb option write_buffer_size = 33554432
> 2020-10-28 17:17:13.246 7eff1f7cd1c0  1 rocksdb: do_open column families: [default]
> 2020-10-28 17:17:13.246 7eff1f7cd1c0  4 rocksdb: RocksDB version: 6.1.2
> 
> 2020-10-28 17:17:13.246 7eff1f7cd1c0  4 rocksdb: Git sha rocksdb_build_git_sha:@0@
> 2020-10-28 17:17:13.246 7eff1f7cd1c0  4 rocksdb: Compile date Aug 13 2020
> 2020-10-28 17:17:13.246 7eff1f7cd1c0  4 rocksdb: DB SUMMARY
> 
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: CURRENT file:  CURRENT
> 
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: IDENTITY file:  IDENTITY
> 
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: MANIFEST file:  MANIFEST-009877 size: 194 Bytes
> 
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: SST files in /var/lib/ceph/mon/ceph-mgmt03/store.db dir, Total Num: 2, files: 009874.sst 009876.sst
> 
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-mgmt03/store.db: 009878.log size: 451 ;
> 
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                         Options.error_if_exists: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                       Options.create_if_missing: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                         Options.paranoid_checks: 1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                                     Options.env: 0x55beb27a5780
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                                Options.info_log: 0x55beb3d8e300
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                Options.max_file_opening_threads: 16
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                              Options.statistics: (nil)
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                               Options.use_fsync: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                       Options.max_log_file_size: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                  Options.max_manifest_file_size: 1073741824
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                   Options.log_file_time_to_roll: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                       Options.keep_log_file_num: 1000
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                    Options.recycle_log_file_num: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                         Options.allow_fallocate: 1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                        Options.allow_mmap_reads: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                       Options.allow_mmap_writes: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                        Options.use_direct_reads: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:          Options.create_missing_column_families: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                              Options.db_log_dir:
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                                 Options.wal_dir: /var/lib/ceph/mon/ceph-mgmt03/store.db
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                Options.table_cache_numshardbits: 6
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                      Options.max_subcompactions: 1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                  Options.max_background_flushes: -1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                         Options.WAL_ttl_seconds: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                       Options.WAL_size_limit_MB: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:             Options.manifest_preallocation_size: 4194304
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                     Options.is_fd_close_on_exec: 1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                   Options.advise_random_on_open: 1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                    Options.db_write_buffer_size: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                    Options.write_buffer_manager: 0x55beb3d98b40
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:         Options.access_hint_on_compaction_start: 1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:  Options.new_table_reader_for_compaction_inputs: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:           Options.random_access_max_buffer_size: 1048576
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                      Options.use_adaptive_mutex: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                            Options.rate_limiter: (nil)
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                       Options.wal_recovery_mode: 2
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                  Options.enable_thread_tracking: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                  Options.enable_pipelined_write: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:         Options.allow_concurrent_memtable_write: 1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:      Options.enable_write_thread_adaptive_yield: 1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:             Options.write_thread_max_yield_usec: 100
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:            Options.write_thread_slow_yield_usec: 3
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                               Options.row_cache: None
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                              Options.wal_filter: None
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:             Options.avoid_flush_during_recovery: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:             Options.allow_ingest_behind: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:             Options.preserve_deletes: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:             Options.two_write_queues: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:             Options.manual_wal_flush: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:             Options.atomic_flush: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:             Options.avoid_unnecessary_blocking_io: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:             Options.max_background_jobs: 2
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:             Options.max_background_compactions: -1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:             Options.avoid_flush_during_shutdown: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:           Options.writable_file_max_buffer_size: 1048576
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:             Options.delayed_write_rate : 16777216
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:             Options.max_total_wal_size: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                   Options.stats_dump_period_sec: 600
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                 Options.stats_persist_period_sec: 600
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                 Options.stats_history_buffer_size: 1048576
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                          Options.max_open_files: -1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                          Options.bytes_per_sync: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                      Options.wal_bytes_per_sync: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:       Options.compaction_readahead_size: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: Compression algorithms supported:
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:        kZSTDNotFinalCompression supported: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:        kZSTD supported: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:        kXpressCompression supported: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:        kLZ4HCCompression supported: 1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:        kLZ4Compression supported: 1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:        kBZip2Compression supported: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:        kZlibCompression supported: 1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:        kSnappyCompression supported: 1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: Fast CRC32 supported: Supported on x86
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: [db/version_set.cc:3543] Recovering from manifest file: /var/lib/ceph/mon/ceph-mgmt03/store.db/MANIFEST-009877
> 
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: [db/column_family.cc:477] --------------- Options for column family [default]:
> 
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:               Options.comparator: leveldb.BytewiseComparator
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:           Options.merge_operator:
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:        Options.compaction_filter: None
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:        Options.compaction_filter_factory: None
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:         Options.memtable_factory: SkipListFactory
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:            Options.table_factory: BlockBasedTable
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55beb30daac0)
>   cache_index_and_filter_blocks: 1
>   cache_index_and_filter_blocks_with_high_priority: 1
>   pin_l0_filter_and_index_blocks_in_cache: 0
>   pin_top_level_index_and_filter: 1
>   index_type: 0
>   data_block_index_type: 0
>   data_block_hash_table_util_ratio: 0.750000
>   hash_index_allow_collision: 1
>   checksum: 1
>   no_block_cache: 0
>   block_cache: 0x55beb30f7010
>   block_cache_name: BinnedLRUCache
>   block_cache_options:
>     capacity : 536870912
>     num_shard_bits : 4
>     strict_capacity_limit : 0
>     high_pri_pool_ratio: 0.000
>   block_cache_compressed: (nil)
>   persistent_cache: (nil)
>   block_size: 4096
>   block_size_deviation: 10
>   block_restart_interval: 16
>   index_block_restart_interval: 1
>   metadata_block_size: 4096
>   partition_filters: 0
>   use_delta_encoding: 1
>   filter_policy: rocksdb.BuiltinBloomFilter
>   whole_key_filtering: 1
>   verify_compression: 0
>   read_amp_bytes_per_bit: 0
>   format_version: 2
>   enable_index_compression: 1
>   block_align: 0
> 
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:        Options.write_buffer_size: 33554432
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:  Options.max_write_buffer_number: 2
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:          Options.compression: NoCompression
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                  Options.bottommost_compression: Disabled
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:       Options.prefix_extractor: nullptr
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:             Options.num_levels: 7
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:        Options.min_write_buffer_number_to_merge: 1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:     Options.max_write_buffer_number_to_maintain: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:            Options.bottommost_compression_opts.window_bits: -14
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                  Options.bottommost_compression_opts.level: 32767
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:               Options.bottommost_compression_opts.strategy: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                  Options.bottommost_compression_opts.enabled: false
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:            Options.compression_opts.window_bits: -14
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                  Options.compression_opts.level: 32767
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:               Options.compression_opts.strategy: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:         Options.compression_opts.max_dict_bytes: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                  Options.compression_opts.enabled: false
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:      Options.level0_file_num_compaction_trigger: 4
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:          Options.level0_slowdown_writes_trigger: 20
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:              Options.level0_stop_writes_trigger: 36
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                   Options.target_file_size_base: 67108864
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:             Options.target_file_size_multiplier: 1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                Options.max_bytes_for_level_base: 268435456
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:       Options.max_sequential_skip_in_iterations: 8
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                    Options.max_compaction_bytes: 1677721600
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                        Options.arena_block_size: 4194304
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:       Options.rate_limit_delay_max_milliseconds: 100
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                Options.disable_auto_compactions: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                        Options.compaction_style: kCompactionStyleLevel
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: Options.compaction_options_universal.size_ratio: 1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                   Options.table_properties_collectors:
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                   Options.inplace_update_support: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                 Options.inplace_update_num_locks: 10000
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:               Options.memtable_whole_key_filtering: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:   Options.memtable_huge_page_size: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                           Options.bloom_locality: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                    Options.max_successive_merges: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                Options.optimize_filters_for_hits: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                Options.paranoid_file_checks: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                Options.force_consistency_checks: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                Options.report_bg_io_stats: 0
> 2020-10-28 17:17:13.247 7eff1f7cd1c0  4 rocksdb:                               Options.ttl: 0
> 2020-10-28 17:17:13.248 7eff1f7cd1c0  3 rocksdb: [db/version_set.cc:2581] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
> 2020-10-28 17:17:13.248 7eff1f7cd1c0  4 rocksdb: [db/version_set.cc:3757] Recovered from manifest file:/var/lib/ceph/mon/ceph-mgmt03/store.db/MANIFEST-009877 succeeded,manifest_file_number is 9877, next_file_number is 9879, last_sequence is 2479, log_number is 9872,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
> 
> 2020-10-28 17:17:13.248 7eff1f7cd1c0  4 rocksdb: [db/version_set.cc:3766] Column family [default] (ID 0), log number is 9872
> 
> 2020-10-28 17:17:13.248 7eff1f7cd1c0  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1603930633249084, "job": 1, "event": "recovery_started", "log_files": [9878]}
> 2020-10-28 17:17:13.248 7eff1f7cd1c0  4 rocksdb: [db/db_impl_open.cc:583] Recovering log #9878 mode 2
> 2020-10-28 17:17:13.249 7eff1f7cd1c0  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1603930633250074, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 9879, "file_size": 1336, "table_properties": {"data_size": 434, "index_size": 28, "filter_size": 69, "raw_key_size": 34, "raw_average_key_size": 34, "raw_value_size": 383, "raw_average_value_size": 383, "num_data_blocks": 1, "num_entries": 1, "filter_policy_name": "rocksdb.BuiltinBloomFilter"}}
> 2020-10-28 17:17:13.249 7eff1f7cd1c0  4 rocksdb: [db/version_set.cc:3036] Creating manifest 9880
> 
> 2020-10-28 17:17:13.249 7eff1f7cd1c0  3 rocksdb: [db/version_set.cc:2581] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
> 2020-10-28 17:17:13.250 7eff1f7cd1c0  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1603930633251509, "job": 1, "event": "recovery_finished"}
> 2020-10-28 17:17:13.253 7eff1f7cd1c0  4 rocksdb: DB pointer 0x55beb3d26400
> 2020-10-28 17:17:13.253 7eff05253700  4 rocksdb: [db/db_impl.cc:777] ------- DUMPING STATS -------
> 2020-10-28 17:17:13.253 7eff05253700  4 rocksdb: [db/db_impl.cc:778]
> ** DB Stats **
> Uptime(secs): 0.0 total, 0.0 interval
> Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
> Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
> Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
> Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
> Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
> Interval stall: 00:00:0.000 H:M:S, 0.0 percent
> 
> ** Compaction Stats [default] **
> Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>   L0      2/0    2.61 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.4      0.00              0.00         1    0.001       0      0
>   L6      1/0    2.39 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0
>  Sum      3/0    5.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.4      0.00              0.00         1    0.001       0      0
>  Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.4      0.00              0.00         1    0.001       0      0
> 
> ** Compaction Stats [default] **
> Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.4      0.00              0.00         1    0.001       0      0
> Uptime(secs): 0.0 total, 0.0 interval
> Flush(GB): cumulative 0.000, interval 0.000
> AddFile(GB): cumulative 0.000, interval 0.000
> AddFile(Total Files): cumulative 0, interval 0
> AddFile(L0 Files): cumulative 0, interval 0
> AddFile(Keys): cumulative 0, interval 0
> Cumulative compaction: 0.00 GB write, 0.22 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
> Interval compaction: 0.00 GB write, 0.22 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
> Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
> 
> ** File Read Latency Histogram By Level [default] **
> 
> ** Compaction Stats [default] **
> Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>   L0      2/0    2.61 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.4      0.00              0.00         1    0.001       0      0
>   L6      1/0    2.39 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0
>  Sum      3/0    5.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.4      0.00              0.00         1    0.001       0      0
>  Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0
> 
> ** Compaction Stats [default] **
> Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.4      0.00              0.00         1    0.001       0      0
> Uptime(secs): 0.0 total, 0.0 interval
> Flush(GB): cumulative 0.000, interval 0.000
> AddFile(GB): cumulative 0.000, interval 0.000
> AddFile(Total Files): cumulative 0, interval 0
> AddFile(L0 Files): cumulative 0, interval 0
> AddFile(Keys): cumulative 0, interval 0
> Cumulative compaction: 0.00 GB write, 0.21 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
> Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
> Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
> 
> ** File Read Latency Histogram By Level [default] **
> 
> 2020-10-28 17:17:13.253 7eff1f7cd1c0  0 mon.mgmt03 does not exist in monmap, will attempt to join an existing cluster
> 2020-10-28 17:17:13.254 7eff1f7cd1c0  0 using public_addr v2:10.2.1.1:0/0 -> [v2:10.2.1.1:3300/0,v1:10.2.1.1:6789/0]
> 2020-10-28 17:17:13.254 7eff1f7cd1c0  0 starting mon.mgmt03 rank -1 at public addrs [v2:10.2.1.1:3300/0,v1:10.2.1.1:6789/0] at bind addrs [v2:10.2.1.1:3300/0,v1:10.2.1.1:6789/0] mon_data /var/lib/ceph/mon/ceph-mgmt03 fsid 374aed9e-5fc1-47e1-8d29-4416f7425e76
> 2020-10-28 17:17:13.256 7eff1f7cd1c0  1 mon.mgmt03@-1(???) e2 preinit fsid 374aed9e-5fc1-47e1-8d29-4416f7425e76
> 2020-10-28 17:17:13.256 7eff1f7cd1c0  1 mon.mgmt03@-1(???) e2  initial_members mgmt01,mgmt02,mgmt03, filtering seed monmap
> 2020-10-28 17:17:13.256 7eff1f7cd1c0  1 mon.mgmt03@-1(???) e2 preinit clean up potentially inconsistent store state
> 2020-10-28 17:17:13.258 7eff1f7cd1c0  0 -- [v2:10.2.1.1:3300/0,v1:10.2.1.1:6789/0] send_to message mon_probe(probe 374aed9e-5fc1-47e1-8d29-4416f7425e76 name mgmt03 new mon_release 14) v7 with empty dest
-- 
David Caro
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx




