Oh boy! Thankfully I upgraded our sandbox cluster, so I'm not in a sticky situation right now :-D
Mike Kuriger
Sr. Unix Systems Engineer
From: Sergey Malinin [mailto:hell@xxxxxxxxxxx]
Sent: Friday, June 08, 2018 4:22 PM
To: Michael Kuriger; Paul Emmerich
Cc: ceph-users
Subject: Re: [ceph-users] cannot add new OSDs in mimic
The lack of developer response (I reported the issue on Jun 4) leads me to believe that it's not a trivial problem and that we should all be getting prepared for a hard time playing with osdmaptool...
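For anyone who wants to start poking at the maps offline, a rough sketch of the kind of inspection osdmaptool supports (file paths below are just placeholders, nothing is specific to this cluster):

  # grab the monitors' current full osdmap and inspect it offline
  ceph osd getmap -o /tmp/osdmap.current
  osdmaptool /tmp/osdmap.current --print

  # pull out the CRUSH map and decompile it for review
  osdmaptool /tmp/osdmap.current --export-crush /tmp/crush.bin
  crushtool -d /tmp/crush.bin -o /tmp/crush.txt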
On Jun 9, 2018, 02:10 +0300, Paul Emmerich <paul.emmerich@xxxxxxxx>, wrote:
We are also seeing this (I've also posted to the issue tracker). It only affects clusters upgraded from Luminous, not new ones.
Also, it's not about re-using OSDs. Deleting any OSD seems to trigger this bug for all new OSDs on upgraded clusters.
We are still using the pre-Luminous way to remove OSDs, i.e.:
* ceph osd down/stop service
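(For reference, the commonly documented pre-Luminous removal sequence looks roughly like the following; this is a sketch of the usual steps from the docs, not necessarily the exact procedure used here:

  ceph osd out osd.<id>
  systemctl stop ceph-osd@<id>      # or the init-script equivalent
  ceph osd crush remove osd.<id>
  ceph auth del osd.<id>
  ceph osd rm <id>
)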
2018-06-08 22:14 GMT+02:00 Michael Kuriger <mk7193@xxxxxxxxx>:
Hi everyone,
I appreciate the suggestions. However, this is still an issue. I've tried adding the OSD using ceph-deploy and also manually from the OSD host. I'm not able to start newly added OSDs at all, even if I use a new ID. The OSD gets added to Ceph, but I cannot start it. OSDs that existed prior to the upgrade to Mimic are working fine. Here is a copy of an OSD log entry:
osd.58 0 failed to load OSD map for epoch 378084, got 0 bytes
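A quick way to sanity-check that gap (a sketch, reusing the epoch from the log; further down the log the mon reports it only has "src has 378085..378738", i.e. the epoch the new OSD asks for has already been trimmed):

  ceph osd dump | head -1                        # current osdmap epoch according to the mons
  ceph osd getmap 378084 -o /tmp/osdmap.378084   # check whether the mons can still serve the epoch the OSD wants
  osdmaptool /tmp/osdmap.378084 --print          # inspect it if the fetch succeeds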
fsid 1ce494ac-a218-4141-9d4f-295e6fa12f2a
last_changed 2018-06-05 15:40:50.179880
created 0.000000
0: 10.3.71.36:6789/0 mon.ceph-mon3
1: 10.3.74.109:6789/0 mon.ceph-mon2
2: 10.3.74.214:6789/0 mon.ceph-mon1
-91> 2018-06-08 12:48:20.697 7fada058e700 1 -- 10.3.56.69:6800/1807239 <== mon.0 10.3.71.36:6789/0 7 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 194+0+0 (645793352 0 0) 0x559f7a3dafc0 con 0x559f7994ec00
-90> 2018-06-08 12:48:20.697 7fada058e700 10 monclient: _check_auth_rotating have uptodate secrets (they expire after 2018-06-08 12:47:50.699337)
-89> 2018-06-08 12:48:20.698 7fadbc9d7140 10 monclient: wait_auth_rotating done
-88> 2018-06-08 12:48:20.698 7fadbc9d7140 10 monclient: _send_command 1 [{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["58"]}]
-87> 2018-06-08 12:48:20.698 7fadbc9d7140 10 monclient: _send_mon_message to mon.ceph-mon3 at 10.3.71.36:6789/0
-86> 2018-06-08 12:48:20.698 7fadbc9d7140 1 -- 10.3.56.69:6800/1807239 --> 10.3.71.36:6789/0 -- mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["58"]} v 0) v1 -- 0x559f793e73c0 con 0
-85> 2018-06-08 12:48:20.700 7fadabaa4700 5 -- 10.3.56.69:6800/1807239 >> 10.3.71.36:6789/0 conn(0x559f7994ec00 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=25741 cs=1 l=1). rx mon.0 seq 8 0x559f793e73c0 mon_command_ack([{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["58"]}]=0 osd.58 already set to class hdd. set-device-class item id 58 name 'osd.58' device_class 'hdd': no change. v378738) v1
-84> 2018-06-08 12:48:20.701 7fada058e700 1 -- 10.3.56.69:6800/1807239 <== mon.0 10.3.71.36:6789/0 8 ==== mon_command_ack([{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["58"]}]=0 osd.58 already set to class hdd. set-device-class item id 58 name 'osd.58' device_class 'hdd': no change. v378738) v1 ==== 211+0+0 (4063854475 0 0) 0x559f793e73c0 con 0x559f7994ec00
-83> 2018-06-08 12:48:20.701 7fada058e700 10 monclient: handle_mon_command_ack 1 [{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["58"]}]
-82> 2018-06-08 12:48:20.701 7fada058e700 10 monclient: _finish_command 1 = 0 osd.58 already set to class hdd. set-device-class item id 58 name 'osd.58' device_class 'hdd': no change.
-81> 2018-06-08 12:48:20.701 7fadbc9d7140 10 monclient: _send_command 2 [{"prefix": "osd crush create-or-move", "id": 58, "weight":0.5240, "args": ["host=sacephnode12", "root=default"]}]
-80> 2018-06-08 12:48:20.701 7fadbc9d7140 10 monclient: _send_mon_message to mon.ceph-mon3 at 10.3.71.36:6789/0
-79> 2018-06-08 12:48:20.701 7fadbc9d7140 1 -- 10.3.56.69:6800/1807239 --> 10.3.71.36:6789/0 -- mon_command({"prefix": "osd crush create-or-move", "id": 58, "weight":0.5240, "args": ["host=sacephnode12", "root=default"]} v 0) v1 -- 0x559f793e7600 con 0
-78> 2018-06-08 12:48:20.703 7fadabaa4700 5 -- 10.3.56.69:6800/1807239 >> 10.3.71.36:6789/0 conn(0x559f7994ec00 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=25741 cs=1 l=1). rx mon.0 seq 9 0x559f793e7600 mon_command_ack([{"prefix": "osd crush create-or-move", "id": 58, "weight":0.5240, "args": ["host=sacephnode12", "root=default"]}]=0 create-or-move updated item name 'osd.58' weight 0.524 at location {host=sacephnode12,root=default} to crush map v378738) v1
-77> 2018-06-08 12:48:20.703 7fada058e700 1 -- 10.3.56.69:6800/1807239 <== mon.0 10.3.71.36:6789/0 9 ==== mon_command_ack([{"prefix": "osd crush create-or-move", "id": 58, "weight":0.5240, "args": ["host=sacephnode12", "root=default"]}]=0 create-or-move updated item name 'osd.58' weight 0.524 at location {host=sacephnode12,root=default} to crush map v378738) v1 ==== 258+0+0 (1998484028 0 0) 0x559f793e7600 con 0x559f7994ec00
-76> 2018-06-08 12:48:20.703 7fada058e700 10 monclient: handle_mon_command_ack 2 [{"prefix": "osd crush create-or-move", "id": 58, "weight":0.5240, "args": ["host=sacephnode12", "root=default"]}]
-75> 2018-06-08 12:48:20.703 7fada058e700 10 monclient: _finish_command 2 = 0 create-or-move updated item name 'osd.58' weight 0.524 at location {host=sacephnode12,root=default} to crush map
-74> 2018-06-08 12:48:20.703 7fadbc9d7140 0 osd.58 0 done with init, starting boot process
-73> 2018-06-08 12:48:20.703 7fadbc9d7140 10 monclient: _renew_subs
-72> 2018-06-08 12:48:20.703 7fadbc9d7140 10 monclient: _send_mon_message to mon.ceph-mon3 at 10.3.71.36:6789/0
-71> 2018-06-08 12:48:20.703 7fadbc9d7140 1 -- 10.3.56.69:6800/1807239 --> 10.3.71.36:6789/0 -- mon_subscribe({mgrmap=0+,osd_pg_creates=0+}) v3 -- 0x559f79408e00 con 0
-70> 2018-06-08 12:48:20.703 7fadbc9d7140 1 osd.58 0 start_boot
-69> 2018-06-08 12:48:20.703 7fadbc9d7140 10 monclient: get_version osdmap req 0x559f797667a0
-68> 2018-06-08 12:48:20.703 7fadbc9d7140 10 monclient: _send_mon_message to mon.ceph-mon3 at 10.3.71.36:6789/0
-67> 2018-06-08 12:48:20.703 7fadbc9d7140 1 -- 10.3.56.69:6800/1807239 --> 10.3.71.36:6789/0 -- mon_get_version(what=osdmap handle=1) v1 -- 0x559f79434b40 con 0
-66> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command status hook 0x559f793f0700
-65> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command flush_journal hook 0x559f793f0700
-64> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command dump_ops_in_flight hook 0x559f793f0700
-63> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command ops hook 0x559f793f0700
-62> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command dump_blocked_ops hook 0x559f793f0700
-61> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command dump_historic_ops hook 0x559f793f0700
-60> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command dump_historic_slow_ops hook 0x559f793f0700
-59> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command dump_historic_ops_by_duration hook 0x559f793f0700
-58> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command dump_op_pq_state hook 0x559f793f0700
-57> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command dump_blacklist hook 0x559f793f0700
-56> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command dump_watchers hook 0x559f793f0700
-55> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command dump_reservations hook 0x559f793f0700
-54> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command get_latest_osdmap hook 0x559f793f0700
-53> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command heap hook 0x559f793f0700
-52> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command set_heap_property hook 0x559f793f0700
-51> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command get_heap_property hook 0x559f793f0700
-50> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command dump_objectstore_kv_stats hook 0x559f793f0700
-49> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command dump_scrubs hook 0x559f793f0700
-48> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command calc_objectstore_db_histogram hook 0x559f793f0700
-47> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command flush_store_cache hook 0x559f793f0700
-46> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command dump_pgstate_history hook 0x559f793f0700
-45> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command compact hook 0x559f793f0700
-44> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command get_mapped_pools hook 0x559f793f0700
-43> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command smart hook 0x559f793f0700
-42> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command list_devices hook 0x559f793f0700
-41> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command setomapval hook 0x559f79767280
-40> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command rmomapkey hook 0x559f79767280
-39> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command setomapheader hook 0x559f79767280
-38> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command getomap hook 0x559f79767280
-37> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command truncobj hook 0x559f79767280
-36> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command injectdataerr hook 0x559f79767280
-35> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command injectmdataerr hook 0x559f79767280
-34> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command set_recovery_delay hook 0x559f79767280
-33> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command trigger_scrub hook 0x559f79767280
-32> 2018-06-08 12:48:20.703 7fadbc9d7140 5 asok(0x559f794345a0) register_command injectfull hook 0x559f79767280
-31> 2018-06-08 12:48:20.704 7fadabaa4700 5 -- 10.3.56.69:6800/1807239 >> 10.3.71.36:6789/0 conn(0x559f7994ec00 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=25741 cs=1 l=1). rx mon.0 seq 10 0x559f7958f8c0 mgrmap(e 201) v1
-30> 2018-06-08 12:48:20.704 7fadabaa4700 5 -- 10.3.56.69:6800/1807239 >> 10.3.71.36:6789/0 conn(0x559f7994ec00 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=25741 cs=1 l=1). rx mon.0 seq 11 0x559f79434b40 mon_get_version_reply(handle=1 version=378738) v2
-29> 2018-06-08 12:48:20.704 7fada058e700 1 -- 10.3.56.69:6800/1807239 <== mon.0 10.3.71.36:6789/0 10 ==== mgrmap(e 201) v1 ==== 1776+0+0 (412200892 0 0) 0x559f7958f8c0 con 0x559f7994ec00
-28> 2018-06-08 12:48:20.704 7fada058e700 4 mgrc handle_mgr_map Got map version 201
-27> 2018-06-08 12:48:20.704 7fada058e700 4 mgrc handle_mgr_map Active mgr is now 10.3.74.109:6801/1015
-26> 2018-06-08 12:48:20.704 7fada058e700 4 mgrc reconnect Starting new session with 10.3.74.109:6801/1015
-25> 2018-06-08 12:48:20.706 7fadac2a5700 2 -- 10.3.56.69:6800/1807239 >> 10.3.74.109:6801/1015 conn(0x559f79950a00 :-1 s=STATE_CONNECTING_WAIT_ACK_SEQ pgs=0 cs=0 l=1)._process_connection got newly_acked_seq 0 vs out_seq 0
-24> 2018-06-08 12:48:20.706 7fada058e700 1 -- 10.3.56.69:6800/1807239 --> 10.3.74.109:6801/1015 -- mgropen(unknown.58) v3 -- 0x559f79a9c000 con 0
-23> 2018-06-08 12:48:20.706 7fada058e700 1 -- 10.3.56.69:6800/1807239 <== mon.0 10.3.71.36:6789/0 11 ==== mon_get_version_reply(handle=1 version=378738) v2 ==== 24+0+0 (2329122009 0 0) 0x559f79434b40 con 0x559f7994ec00
-22> 2018-06-08 12:48:20.706 7fada058e700 10 monclient: handle_get_version_reply finishing 0x559f797667a0 version 378738
-21> 2018-06-08 12:48:20.706 7fad96a13700 5 osd.58 0 heartbeat: osd_stat(1.0 GiB used, 536 GiB avail, 537 GiB total, peers [] op hist [])
-20> 2018-06-08 12:48:20.706 7fad96a13700 -1 osd.58 0 waiting for initial osdmap
-19> 2018-06-08 12:48:20.706 7fad96a13700 10 monclient: _renew_subs
-18> 2018-06-08 12:48:20.706 7fad96a13700 10 monclient: _send_mon_message to mon.ceph-mon3 at 10.3.71.36:6789/0
-17> 2018-06-08 12:48:20.706 7fad96a13700 1 -- 10.3.56.69:6800/1807239 --> 10.3.71.36:6789/0 -- mon_subscribe({osdmap=378084}) v3 -- 0x559f7a3b8400 con 0
-16> 2018-06-08 12:48:20.707 7fadac2a5700 5 -- 10.3.56.69:6800/1807239 >> 10.3.74.109:6801/1015 conn(0x559f79950a00 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=201245 cs=1 l=1). rx mgr.44007797 seq 1 0x559f79435860 mgrconfigure(period=5, threshold=5) v2
-15> 2018-06-08 12:48:20.708 7fada058e700 1 -- 10.3.56.69:6800/1807239 <== mgr.44007797 10.3.74.109:6801/1015 1 ==== mgrconfigure(period=5, threshold=5) v2 ==== 8+0+0 (3460719617 0 0) 0x559f79435860 con 0x559f79950a00
-14> 2018-06-08 12:48:20.708 7fada058e700 4 mgrc handle_mgr_configure stats_period=5
-13> 2018-06-08 12:48:20.708 7fada058e700 4 mgrc handle_mgr_configure updated stats threshold: 5
-12> 2018-06-08 12:48:20.708 7fadabaa4700 5 -- 10.3.56.69:6800/1807239 >> 10.3.71.36:6789/0 conn(0x559f7994ec00 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=25741 cs=1 l=1). rx mon.0 seq 12 0x559f79aba000 osd_map(378085..378085 src has 378085..378738 +gap_removed_snaps) v4
-11> 2018-06-08 12:48:20.708 7fada058e700 1 -- 10.3.56.69:6800/1807239 --> 10.3.74.109:6801/1015 -- mgrreport(unknown.58 +54-0 packed 742) v6 -- 0x559f79a9c300 con 0
-10> 2018-06-08 12:48:20.708 7fada058e700 1 -- 10.3.56.69:6800/1807239 --> 10.3.74.109:6801/1015 -- pg_stats(0 pgs tid 0 v 0) v1 -- 0x559f7958f600 con 0
-9> 2018-06-08 12:48:20.708 7fada058e700 1 -- 10.3.56.69:6800/1807239 <== mon.0 10.3.71.36:6789/0 12 ==== osd_map(378085..378085 src has 378085..378738 +gap_removed_snaps) v4 ==== 33348+0+0 (2799879432 0 0) 0x559f79aba000 con 0x559f7994ec00
-8> 2018-06-08 12:48:20.708 7fada058e700 3 osd.58 0 handle_osd_map epochs [378085,378085], i have 0, src has [378085,378738]
-7> 2018-06-08 12:48:20.709 7fadabaa4700 5 -- 10.3.56.69:6800/1807239 >> 10.3.71.36:6789/0 conn(0x559f7994ec00 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=25741 cs=1 l=1). rx mon.0 seq 13 0x559f79abaa00 osd_map(378086..378125 src has 378085..378738) v4
-6> 2018-06-08 12:48:20.709 7fada058e700 -1 osd.58 0 failed to load OSD map for epoch 378084, got 0 bytes
-5> 2018-06-08 12:48:20.710 7fadac2a5700 1 -- 10.3.56.69:6800/1807239 >> 10.3.74.109:6801/1015 conn(0x559f79950a00 :-1 s=STATE_OPEN pgs=201245 cs=1 l=1).read_bulk peer close file descriptor 38
-4> 2018-06-08 12:48:20.710 7fadac2a5700 1 -- 10.3.56.69:6800/1807239 >> 10.3.74.109:6801/1015 conn(0x559f79950a00 :-1 s=STATE_OPEN pgs=201245 cs=1 l=1).read_until read failed
-3> 2018-06-08 12:48:20.710 7fadac2a5700 1 -- 10.3.56.69:6800/1807239 >> 10.3.74.109:6801/1015 conn(0x559f79950a00 :-1 s=STATE_OPEN pgs=201245 cs=1 l=1).process read tag failed
-2> 2018-06-08 12:48:20.710 7fadac2a5700 1 -- 10.3.56.69:6800/1807239 >> 10.3.74.109:6801/1015 conn(0x559f79950a00 :-1 s=STATE_OPEN pgs=201245 cs=1 l=1).fault on lossy channel, failing
-1> 2018-06-08 12:48:20.710 7fadac2a5700 2 -- 10.3.56.69:6800/1807239 >> 10.3.74.109:6801/1015 conn(0x559f79950a00 :-1 s=STATE_OPEN pgs=201245 cs=1 l=1)._stop
0> 2018-06-08 12:48:20.711 7fada058e700 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/13.2.0/rpm/el7/BUILD/ceph-13.2.0/src/osd/OSD.h: In function 'OSDMapRef OSDService::get_map(epoch_t)' thread 7fada058e700 time 2018-06-08 12:48:20.710675
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/13.2.0/rpm/el7/BUILD/ceph-13.2.0/src/osd/OSD.h: 828: FAILED assert(ret)
ceph version 13.2.0 (79a10589f1f80dfe21e8f9794365ed98143071c4) mimic (stable)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0xff) [0x7fadb3e1753f]
2: (()+0x286727) [0x7fadb3e17727]
3: (OSDService::get_map(unsigned int)+0x4a) [0x559f76fe4dda]
4: (OSD::handle_osd_map(MOSDMap*)+0x1020) [0x559f76f921f0]
5: (OSD::_dispatch(Message*)+0xa1) [0x559f76f94d21]
6: (OSD::ms_dispatch(Message*)+0x56) [0x559f76f95066]
7: (DispatchQueue::entry()+0xb5a) [0x7fadb3e8d74a]
8: (DispatchQueue::DispatchThread::entry()+0xd) [0x7fadb3f2df2d]
9: (()+0x7e25) [0x7fadb0afde25]
10: (clone()+0x6d) [0x7fadafbf134d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
--- logging levels ---
0/ 5 none
0/ 1 lockdep
0/ 1 context
1/ 1 crush
1/ 5 mds
1/ 5 mds_balancer
1/ 5 mds_locker
1/ 5 mds_log
1/ 5 mds_log_expire
1/ 5 mds_migrator
0/ 1 buffer
0/ 1 timer
0/ 1 filer
0/ 1 striper
0/ 1 objecter
0/ 5 rados
0/ 5 rbd
0/ 5 rbd_mirror
0/ 5 rbd_replay
0/ 5 journaler
0/ 5 objectcacher
0/ 5 client
1/ 5 osd
0/ 5 optracker
0/ 5 objclass
1/ 3 filestore
1/ 3 journal
0/ 5 ms
1/ 5 mon
0/10 monc
1/ 5 paxos
0/ 5 tp
1/ 5 auth
1/ 5 crypto
1/ 1 finisher
1/ 1 reserver
1/ 5 heartbeatmap
1/ 5 perfcounter
1/ 5 rgw
1/ 5 rgw_sync
1/10 civetweb
1/ 5 javaclient
1/ 5 asok
1/ 1 throttle
0/ 0 refs
1/ 5 xio
1/ 5 compressor
1/ 5 bluestore
1/ 5 bluefs
1/ 3 bdev
1/ 5 kstore
4/ 5 rocksdb
4/ 5 leveldb
4/ 5 memdb
1/ 5 kinetic
1/ 5 fuse
1/ 5 mgr
1/ 5 mgrc
1/ 5 dpdk
1/ 5 eventtrace
-2/-2 (syslog threshold)
-1/-1 (stderr threshold)
max_recent 10000
max_new 1000
log_file /var/log/ceph/ceph-osd.58.log
--- end dump of recent events ---
2018-06-08 12:48:20.717 7fada058e700 -1 *** Caught signal (Aborted) **
in thread 7fada058e700 thread_name:ms_dispatch
ceph version 13.2.0 (79a10589f1f80dfe21e8f9794365ed98143071c4) mimic (stable)
1: (()+0x8e1870) [0x559f774af870]
2: (()+0xf5e0) [0x7fadb0b055e0]
3: (gsignal()+0x37) [0x7fadafb2e1f7]
4: (abort()+0x148) [0x7fadafb2f8e8]
5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x25d) [0x7fadb3e1769d]
6: (()+0x286727) [0x7fadb3e17727]
7: (OSDService::get_map(unsigned int)+0x4a) [0x559f76fe4dda]
8: (OSD::handle_osd_map(MOSDMap*)+0x1020) [0x559f76f921f0]
9: (OSD::_dispatch(Message*)+0xa1) [0x559f76f94d21]
10: (OSD::ms_dispatch(Message*)+0x56) [0x559f76f95066]
11: (DispatchQueue::entry()+0xb5a) [0x7fadb3e8d74a]
12: (DispatchQueue::DispatchThread::entry()+0xd) [0x7fadb3f2df2d]
13: (()+0x7e25) [0x7fadb0afde25]
14: (clone()+0x6d) [0x7fadafbf134d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
--- begin dump of recent events ---
0> 2018-06-08 12:48:20.717 7fada058e700 -1 *** Caught signal (Aborted) **
in thread 7fada058e700 thread_name:ms_dispatch
ceph version 13.2.0 (79a10589f1f80dfe21e8f9794365ed98143071c4) mimic (stable)
1: (()+0x8e1870) [0x559f774af870]
2: (()+0xf5e0) [0x7fadb0b055e0]
3: (gsignal()+0x37) [0x7fadafb2e1f7]
4: (abort()+0x148) [0x7fadafb2f8e8]
5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x25d) [0x7fadb3e1769d]
6: (()+0x286727) [0x7fadb3e17727]
7: (OSDService::get_map(unsigned int)+0x4a) [0x559f76fe4dda]
8: (OSD::handle_osd_map(MOSDMap*)+0x1020) [0x559f76f921f0]
9: (OSD::_dispatch(Message*)+0xa1) [0x559f76f94d21]
10: (OSD::ms_dispatch(Message*)+0x56) [0x559f76f95066]
11: (DispatchQueue::entry()+0xb5a) [0x7fadb3e8d74a]
12: (DispatchQueue::DispatchThread::entry()+0xd) [0x7fadb3f2df2d]
13: (()+0x7e25) [0x7fadb0afde25]
14: (clone()+0x6d) [0x7fadafbf134d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Mike Kuriger
--
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com