Hello, could you please help me with the following? I'm on Ubuntu 18.04 (Bionic), trying to install Ceph 12.2.12 with ceph-deploy on 3 Odroid XU4 nodes (armhf). Creating the OSDs on LVM volumes seems to work, but after starting the manager with "ceph-deploy mgr create node1", the mgr goes into starting mode and the cluster then stays in HEALTH_WARN because no mgr daemon is active.

Regards,

$ ceph -s
---------------------------------------------------------------------
  cluster:
    id:     84f324c2-ac27-4f72-bcb0-7ff1355ee97e
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 3 daemons, quorum node1,node3,node2
    mgr: no daemons active
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:

--- /var/log/ceph/ceph-mgr.node1.log ---
2019-11-14 11:08:45.794236 b6f87230  0 ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable), process ceph-mgr, pid 7879
2019-11-14 11:08:45.799472 b6f87230  0 pidfile_write: ignore empty --pid-file
2019-11-14 11:08:45.839575 b6f87230  1 mgr send_beacon standby
2019-11-14 11:08:45.866590 b06c9c30 -1 *** Caught signal (Segmentation fault) **
 in thread b06c9c30 thread_name:ms_dispatch

 ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)
 1: (()+0x302eac) [0x7d6eac]
 2: (()+0x25750) [0xb6862750]
 3: (_ULarm_step()+0x5b) [0xb67eecec]
 4: (()+0x255e8) [0xb6cb05e8]
 5: (GetStackTrace(void**, int, int)+0x25) [0xb6cb0a3e]
 6: (tcmalloc::PageHeap::GrowHeap(unsigned int)+0xb9) [0xb6ca536a]
 7: (tcmalloc::PageHeap::New(unsigned int)+0x79) [0xb6ca55e6]
 8: (tcmalloc::CentralFreeList::Populate()+0x71) [0xb6ca45ce]
 9: (tcmalloc::CentralFreeList::FetchFromOneSpansSafe(int, void**, void**)+0x1b) [0xb6ca4760]
 10: (tcmalloc::CentralFreeList::RemoveRange(void**, void**, int)+0x6d) [0xb6ca47de]
 11: (tcmalloc::ThreadCache::FetchFromCentralCache(unsigned int, unsigned int)+0x51) [0xb6ca6a56]
 12: (malloc()+0x22d) [0xb6cb1a8e]
 NOTE: a copy of
the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- begin dump of recent events ---
  -108> 2019-11-14 11:08:45.776013 b6f87230  5 asok(0x5625320) register_command perfcounters_dump hook 0x55bc090
  -107> 2019-11-14 11:08:45.776070 b6f87230  5 asok(0x5625320) register_command 1 hook 0x55bc090
  -106> 2019-11-14 11:08:45.776096 b6f87230  5 asok(0x5625320) register_command perf dump hook 0x55bc090
  -105> 2019-11-14 11:08:45.776134 b6f87230  5 asok(0x5625320) register_command perfcounters_schema hook 0x55bc090
  -104> 2019-11-14 11:08:45.776155 b6f87230  5 asok(0x5625320) register_command perf histogram dump hook 0x55bc090
  -103> 2019-11-14 11:08:45.776194 b6f87230  5 asok(0x5625320) register_command 2 hook 0x55bc090
  -102> 2019-11-14 11:08:45.776244 b6f87230  5 asok(0x5625320) register_command perf schema hook 0x55bc090
  -101> 2019-11-14 11:08:45.776267 b6f87230  5 asok(0x5625320) register_command perf histogram schema hook 0x55bc090
  -100> 2019-11-14 11:08:45.776304 b6f87230  5 asok(0x5625320) register_command perf reset hook 0x55bc090
   -99> 2019-11-14 11:08:45.776325 b6f87230  5 asok(0x5625320) register_command config show hook 0x55bc090
   -98> 2019-11-14 11:08:45.776358 b6f87230  5 asok(0x5625320) register_command config help hook 0x55bc090
   -97> 2019-11-14 11:08:45.776380 b6f87230  5 asok(0x5625320) register_command config set hook 0x55bc090
   -96> 2019-11-14 11:08:45.776416 b6f87230  5 asok(0x5625320) register_command config get hook 0x55bc090
   -95> 2019-11-14 11:08:45.776457 b6f87230  5 asok(0x5625320) register_command config diff hook 0x55bc090
   -94> 2019-11-14 11:08:45.776479 b6f87230  5 asok(0x5625320) register_command config diff get hook 0x55bc090
   -93> 2019-11-14 11:08:45.776513 b6f87230  5 asok(0x5625320) register_command log flush hook 0x55bc090
   -92> 2019-11-14 11:08:45.776534 b6f87230  5 asok(0x5625320) register_command log dump hook 0x55bc090
   -91> 2019-11-14 11:08:45.776568 b6f87230  5 asok(0x5625320) register_command log reopen hook 0x55bc090
   -90> 2019-11-14 11:08:45.776679 b6f87230  5 asok(0x5625320) register_command dump_mempools hook 0x5757b04
   -89> 2019-11-14 11:08:45.794236 b6f87230  0 ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable), process ceph-mgr, pid 7879
   -88> 2019-11-14 11:08:45.799472 b6f87230  0 pidfile_write: ignore empty --pid-file
   -87> 2019-11-14 11:08:45.800376 b6f87230  1 finished global_init_daemonize
   -86> 2019-11-14 11:08:45.813287 b6f87230  5 asok(0x5625320) init /var/run/ceph/ceph-mgr.node1.asok
   -85> 2019-11-14 11:08:45.813359 b6f87230  5 asok(0x5625320) bind_and_listen /var/run/ceph/ceph-mgr.node1.asok
   -84> 2019-11-14 11:08:45.813867 b6f87230  5 asok(0x5625320) register_command 0 hook 0x55bc180
   -83> 2019-11-14 11:08:45.813920 b6f87230  5 asok(0x5625320) register_command version hook 0x55bc180
   -82> 2019-11-14 11:08:45.813954 b6f87230  5 asok(0x5625320) register_command git_version hook 0x55bc180
   -81> 2019-11-14 11:08:45.813994 b6f87230  5 asok(0x5625320) register_command help hook 0x55bc178
   -80> 2019-11-14 11:08:45.814085 b6f87230  5 asok(0x5625320) register_command get_command_descriptions hook 0x55bc170
   -79> 2019-11-14 11:08:45.814371 b3ed0c30  5 asok(0x5625320) entry start
   -78> 2019-11-14 11:08:45.819066 b36cfc30  2 Event(0x55be068 nevent=5000 time_id=1).set_owner idx=0 owner=3010264112
   -77> 2019-11-14 11:08:45.819281 b2ecec30  2 Event(0x55be488 nevent=5000 time_id=1).set_owner idx=1 owner=3001871408
   -76> 2019-11-14 11:08:45.819484 b26cdc30  2 Event(0x55be1c8 nevent=5000 time_id=1).set_owner idx=2 owner=2993478704
   -75> 2019-11-14 11:08:45.821210 b6f87230  1 Processor -- start
   -74> 2019-11-14 11:08:45.821472 b6f87230  1 -- - start start
   -73> 2019-11-14 11:08:45.821506 b6f87230 10 monclient: build_initial_monmap
   -72> 2019-11-14 11:08:45.821667 b6f87230 10 monclient: init
   -71> 2019-11-14 11:08:45.821796 b6f87230  5 adding auth protocol: cephx
   -70> 2019-11-14 11:08:45.821826 b6f87230 10 monclient: auth_supported 2 method cephx
   -69> 2019-11-14 11:08:45.822606 b6f87230  2 auth: KeyRing::load: loaded key file /var/lib/ceph/mgr/ceph-node1/keyring
   -68> 2019-11-14 11:08:45.822947 b6f87230 10 monclient: _reopen_session rank -1
   -67> 2019-11-14 11:08:45.823131 b6f87230 10 monclient(hunting): picked mon.noname-b con 0x5804d00 addr [2a01:e0a:c:b9f0:21e:6ff:fe36:86b6]:6789/0
   -66> 2019-11-14 11:08:45.823297 b6f87230 10 monclient(hunting): picked mon.noname-c con 0x5805a00 addr [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0
   -65> 2019-11-14 11:08:45.823437 b6f87230  1 -- - --> [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0 -- auth(proto 0 30 bytes epoch 0) v1 -- 0x5600680 con 0
   -64> 2019-11-14 11:08:45.823555 b6f87230  1 -- - --> [2a01:e0a:c:b9f0:21e:6ff:fe36:86b6]:6789/0 -- auth(proto 0 30 bytes epoch 0) v1 -- 0x5600820 con 0
   -63> 2019-11-14 11:08:45.823646 b6f87230 10 monclient(hunting): _renew_subs
   -62> 2019-11-14 11:08:45.825773 b26cdc30  1 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 learned_addr learned my addr [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422
   -61> 2019-11-14 11:08:45.826773 b26cdc30  2 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 >> [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0 conn(0x5805a00 :-1 s=STATE_CONNECTING_WAIT_ACK_SEQ pgs=0 cs=0 l=0)._process_connection got newly_acked_seq 0 vs out_seq 0
   -60> 2019-11-14 11:08:45.827140 b2ecec30  2 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 >> [2a01:e0a:c:b9f0:21e:6ff:fe36:86b6]:6789/0 conn(0x5804d00 :-1 s=STATE_CONNECTING_WAIT_ACK_SEQ pgs=0 cs=0 l=0)._process_connection got newly_acked_seq 0 vs out_seq 0
   -59> 2019-11-14 11:08:45.828839 b26cdc30  5 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 >> [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0 conn(0x5805a00 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=67 cs=1 l=1). rx mon.1 seq 1 0x5630900 mon_map magic: 0 v1
   -58> 2019-11-14 11:08:45.829033 b06c9c30  1 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 <== mon.1 [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0 1 ==== mon_map magic: 0 v1 ==== 469+0+0 (726272991 0 0) 0x5630900 con 0x5805a00
   -57> 2019-11-14 11:08:45.829141 b06c9c30 10 monclient(hunting): handle_monmap mon_map magic: 0 v1
   -56> 2019-11-14 11:08:45.829085 b26cdc30  5 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 >> [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0 conn(0x5805a00 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=67 cs=1 l=1). rx mon.1 seq 2 0x560e540 auth_reply(proto 2 0 (0) Success) v1
   -55> 2019-11-14 11:08:45.829165 b2ecec30  5 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 >> [2a01:e0a:c:b9f0:21e:6ff:fe36:86b6]:6789/0 conn(0x5804d00 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=87 cs=1 l=1). rx mon.2 seq 1 0x5630a80 mon_map magic: 0 v1
   -54> 2019-11-14 11:08:45.829225 b06c9c30 10 monclient(hunting): got monmap 1, mon.noname-c is now rank -1
   -53> 2019-11-14 11:08:45.829244 b06c9c30 10 monclient(hunting): dump:
epoch 1
fsid 84f324c2-ac27-4f72-bcb0-7ff1355ee97e
last_changed 2019-11-13 14:18:49.133327
created 2019-11-13 14:18:49.133327
0: [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:6789/0 mon.node1
1: [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0 mon.fpgh3
2: [2a01:e0a:c:b9f0:21e:6ff:fe36:86b6]:6789/0 mon.fpgh2

   -52> 2019-11-14 11:08:45.829331 b2ecec30  5 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 >> [2a01:e0a:c:b9f0:21e:6ff:fe36:86b6]:6789/0 conn(0x5804d00 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=87 cs=1 l=1). rx mon.2 seq 2 0x560e700 auth_reply(proto 2 0 (0) Success) v1
   -51> 2019-11-14 11:08:45.829402 b06c9c30  1 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 <== mon.1 [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 33+0+0 (3710817385 0 0) 0x560e540 con 0x5805a00
   -50> 2019-11-14 11:08:45.829512 b06c9c30 10 monclient(hunting): my global_id is 24280
   -49> 2019-11-14 11:08:45.829791 b06c9c30  1 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 --> [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- 0x5600d00 con 0
   -48> 2019-11-14 11:08:45.829910 b06c9c30  1 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 <== mon.2 [2a01:e0a:c:b9f0:21e:6ff:fe36:86b6]:6789/0 1 ==== mon_map magic: 0 v1 ==== 469+0+0 (726272991 0 0) 0x5630a80 con 0x5804d00
   -47> 2019-11-14 11:08:45.829982 b06c9c30 10 monclient(hunting): handle_monmap mon_map magic: 0 v1
   -46> 2019-11-14 11:08:45.830052 b06c9c30 10 monclient(hunting): got monmap 1, mon.fpgh2 is now rank 2
   -45> 2019-11-14 11:08:45.830085 b06c9c30 10 monclient(hunting): dump:
epoch 1
fsid 84f324c2-ac27-4f72-bcb0-7ff1355ee97e
last_changed 2019-11-13 14:18:49.133327
created 2019-11-13 14:18:49.133327
0: [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:6789/0 mon.node1
1: [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0 mon.fpgh3
2: [2a01:e0a:c:b9f0:21e:6ff:fe36:86b6]:6789/0 mon.fpgh2

   -44> 2019-11-14 11:08:45.830244 b06c9c30  1 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 <== mon.2 [2a01:e0a:c:b9f0:21e:6ff:fe36:86b6]:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 33+0+0 (2346605455 0 0) 0x560e700 con 0x5804d00
   -43> 2019-11-14 11:08:45.830343 b06c9c30 10 monclient(hunting): my global_id is 24308
   -42> 2019-11-14 11:08:45.830535 b06c9c30  1 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 --> [2a01:e0a:c:b9f0:21e:6ff:fe36:86b6]:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- 0x5600680 con 0
   -41> 2019-11-14 11:08:45.832185 b26cdc30  5 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 >> [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0 conn(0x5805a00 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=67 cs=1 l=1). rx mon.1 seq 3 0x560e8c0 auth_reply(proto 2 0 (0) Success) v1
   -40> 2019-11-14 11:08:45.832327 b06c9c30  1 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 <== mon.1 [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0 3 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 206+0+0 (3339254479 0 0) 0x560e8c0 con 0x5805a00
   -39> 2019-11-14 11:08:45.832834 b06c9c30  1 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 --> [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0 -- auth(proto 2 165 bytes epoch 0) v1 -- 0x5600d00 con 0
   -38> 2019-11-14 11:08:45.833078 b2ecec30  5 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 >> [2a01:e0a:c:b9f0:21e:6ff:fe36:86b6]:6789/0 conn(0x5804d00 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=87 cs=1 l=1). rx mon.2 seq 3 0x560e700 auth_reply(proto 2 0 (0) Success) v1
   -37> 2019-11-14 11:08:45.833228 b06c9c30  1 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 <== mon.2 [2a01:e0a:c:b9f0:21e:6ff:fe36:86b6]:6789/0 3 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 206+0+0 (3081092587 0 0) 0x560e700 con 0x5804d00
   -36> 2019-11-14 11:08:45.833816 b06c9c30  1 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 --> [2a01:e0a:c:b9f0:21e:6ff:fe36:86b6]:6789/0 -- auth(proto 2 165 bytes epoch 0) v1 -- 0x5600820 con 0
   -35> 2019-11-14 11:08:45.836366 b26cdc30  5 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 >> [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0 conn(0x5805a00 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=67 cs=1 l=1). rx mon.1 seq 4 0x560ea80 auth_reply(proto 2 0 (0) Success) v1
   -34> 2019-11-14 11:08:45.836509 b06c9c30  1 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 <== mon.1 [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 751+0+0 (1301292739 0 0) 0x560ea80 con 0x5805a00
   -33> 2019-11-14 11:08:45.837249 b2ecec30  5 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 >> [2a01:e0a:c:b9f0:21e:6ff:fe36:86b6]:6789/0 conn(0x5804d00 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=87 cs=1 l=1). rx mon.2 seq 4 0x560e8c0 auth_reply(proto 2 0 (0) Success) v1
   -32> 2019-11-14 11:08:45.837359 b06c9c30  1 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 >> [2a01:e0a:c:b9f0:21e:6ff:fe36:86b6]:6789/0 conn(0x5804d00 :-1 s=STATE_OPEN pgs=87 cs=1 l=1).mark_down
   -31> 2019-11-14 11:08:45.837425 b06c9c30  2 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 >> [2a01:e0a:c:b9f0:21e:6ff:fe36:86b6]:6789/0 conn(0x5804d00 :-1 s=STATE_OPEN pgs=87 cs=1 l=1)._stop
   -30> 2019-11-14 11:08:45.837558 b06c9c30  1 monclient: found mon.fpgh3
   -29> 2019-11-14 11:08:45.837623 b06c9c30 10 monclient: _send_mon_message to mon.fpgh3 at [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0
   -28> 2019-11-14 11:08:45.837669 b06c9c30  1 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 --> [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0 -- mon_subscribe({mgrmap=0+,monmap=0+}) v2 -- 0x55beb00 con 0
   -27> 2019-11-14 11:08:45.837797 b06c9c30 10 monclient: _check_auth_rotating renewing rotating keys (they expired before 2019-11-14 11:08:15.837792)
   -26> 2019-11-14 11:08:45.837860 b06c9c30 10 monclient: _send_mon_message to mon.fpgh3 at [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0
   -25> 2019-11-14 11:08:45.837923 b06c9c30  1 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 --> [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0 -- auth(proto 2 2 bytes epoch 0) v1 -- 0x5600680 con 0
   -24> 2019-11-14 11:08:45.838036 b6f87230  5 monclient: authenticate success, global_id 24280
   -23> 2019-11-14 11:08:45.838182 b6f87230 10 log_channel(cluster) update_config to_monitors: true to_syslog: false syslog_facility: daemon prio: info to_graylog: false graylog_host: 127.0.0.1 graylog_port: 12201)
   -22> 2019-11-14 11:08:45.838290 b6f87230 10 log_channel(audit) update_config to_monitors: true to_syslog: false syslog_facility: local0 prio: info to_graylog: false graylog_host: 127.0.0.1 graylog_port: 12201)
   -21> 2019-11-14 11:08:45.838619 b6f87230  5 asok(0x5625320) register_command objecter_requests hook 0x55bc1d8
   -20> 2019-11-14 11:08:45.838736 b6f87230 10 monclient: _renew_subs
   -19> 2019-11-14 11:08:45.838770 b6f87230 10 monclient: _send_mon_message to mon.fpgh3 at [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0
   -18> 2019-11-14 11:08:45.838816 b6f87230  1 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 --> [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0 -- mon_subscribe({osdmap=0}) v2 -- 0x55bec60 con 0
   -17> 2019-11-14 11:08:45.839247 b6f87230  5 asok(0x5625320) register_command mds_requests hook 0xbe83e770
   -16> 2019-11-14 11:08:45.839310 b6f87230  5 asok(0x5625320) register_command mds_sessions hook 0xbe83e770
   -15> 2019-11-14 11:08:45.839347 b6f87230  5 asok(0x5625320) register_command dump_cache hook 0xbe83e770
   -14> 2019-11-14 11:08:45.839383 b6f87230  5 asok(0x5625320) register_command kick_stale_sessions hook 0xbe83e770
   -13> 2019-11-14 11:08:45.839420 b6f87230  5 asok(0x5625320) register_command status hook 0xbe83e770
   -12> 2019-11-14 11:08:45.839575 b6f87230  1 mgr send_beacon standby
   -11> 2019-11-14 11:08:45.839646 b26cdc30  5 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 >> [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0 conn(0x5805a00 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=67 cs=1 l=1). rx mon.1 seq 5 0x55fde00 mgrmap(e 11813) v1
   -10> 2019-11-14 11:08:45.839756 b06c9c30  1 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 <== mon.1 [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0 5 ==== mgrmap(e 11813) v1 ==== 237+0+0 (2937343514 0 0) 0x55fde00 con 0x5805a00
    -9> 2019-11-14 11:08:45.839778 b26cdc30  5 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 >> [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0 conn(0x5805a00 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=67 cs=1 l=1). rx mon.1 seq 6 0x5630900 mon_map magic: 0 v1
    -8> 2019-11-14 11:08:45.840302 b26cdc30  5 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 >> [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0 conn(0x5805a00 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=67 cs=1 l=1). rx mon.1 seq 7 0x560e700 auth_reply(proto 2 0 (0) Success) v1
    -7> 2019-11-14 11:08:45.840645 b26cdc30  5 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 >> [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0 conn(0x5805a00 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=67 cs=1 l=1). rx mon.1 seq 8 0x5600680 osd_map(45..45 src has 1..45) v3
    -6> 2019-11-14 11:08:45.840927 b6f87230 10 monclient: _send_mon_message to mon.fpgh3 at [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0
    -5> 2019-11-14 11:08:45.840980 b6f87230  1 -- [2a01:e0a:c:b9f0:21e:6ff:fe30:c8fc]:0/1860748422 --> [2a01:e0a:c:b9f0:21e:6ff:fe36:86ad]:6789/0 -- mgrbeacon mgr.node1(84f324c2-ac27-4f72-bcb0-7ff1355ee97e,24280, -, 0) v6 -- 0x55d6400 con 0
    -4> 2019-11-14 11:08:45.841111 b6f87230  4 mgr init Complete.
    -3> 2019-11-14 11:08:45.841220 b06c9c30  4 mgr ms_dispatch standby mgrmap(e 11813) v1
    -2> 2019-11-14 11:08:45.841266 b06c9c30  4 mgr handle_mgr_map received map epoch 11813
    -1> 2019-11-14 11:08:45.841282 b06c9c30  4 mgr handle_mgr_map active in map: 0 active is 0
     0> 2019-11-14 11:08:45.866590 b06c9c30 -1 *** Caught signal (Segmentation fault) **
 in thread b06c9c30 thread_name:ms_dispatch

 ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)
 1: (()+0x302eac) [0x7d6eac]
 2: (()+0x25750) [0xb6862750]
 3: (_ULarm_step()+0x5b) [0xb67eecec]
 4: (()+0x255e8) [0xb6cb05e8]
 5: (GetStackTrace(void**, int, int)+0x25) [0xb6cb0a3e]
 6: (tcmalloc::PageHeap::GrowHeap(unsigned int)+0xb9) [0xb6ca536a]
 7: (tcmalloc::PageHeap::New(unsigned int)+0x79) [0xb6ca55e6]
 8: (tcmalloc::CentralFreeList::Populate()+0x71) [0xb6ca45ce]
 9: (tcmalloc::CentralFreeList::FetchFromOneSpansSafe(int, void**, void**)+0x1b) [0xb6ca4760]
 10: (tcmalloc::CentralFreeList::RemoveRange(void**, void**, int)+0x6d) [0xb6ca47de]
 11: (tcmalloc::ThreadCache::FetchFromCentralCache(unsigned int, unsigned int)+0x51) [0xb6ca6a56]
 12: (malloc()+0x22d) [0xb6cb1a8e]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
--- logging levels ---
   0/ 5 none
   0/ 1 lockdep
   0/ 1 context
   1/ 1 crush
   1/ 5 mds
   1/ 5 mds_balancer
   1/ 5 mds_locker
   1/ 5 mds_log
   1/ 5 mds_log_expire
   1/ 5 mds_migrator
   0/ 1 buffer
   0/ 1 timer
   0/ 1 filer
   0/ 1 striper
   0/ 1 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 rbd_mirror
   0/ 5 rbd_replay
   0/ 5 journaler
   0/ 5 objectcacher
   0/ 5 client
   1/ 5 osd
   0/ 5 optracker
   0/ 5 objclass
   1/ 3 filestore
   1/ 3 journal
   0/ 5 ms
   1/ 5 mon
   0/10 monc
   1/ 5 paxos
   0/ 5 tp
   1/ 5 auth
   1/ 5 crypto
   1/ 1 finisher
   1/ 1 reserver
   1/ 5 heartbeatmap
   1/ 5 perfcounter
   1/ 5 rgw
   1/10 civetweb
   1/ 5 javaclient
   1/ 5 asok
   1/ 1 throttle
   0/ 0 refs
   1/ 5 xio
   1/ 5 compressor
   1/ 5 bluestore
   1/ 5 bluefs
   1/ 3 bdev
   1/ 5 kstore
   4/ 5 rocksdb
   4/ 5 leveldb
   4/ 5 memdb
   1/ 5 kinetic
   1/ 5 fuse
   1/ 5 mgr
   1/ 5 mgrc
   1/ 5 dpdk
   1/ 5 eventtrace
  -2/-2 (syslog threshold)
  -1/-1 (stderr threshold)
  max_recent 10000
  max_new 1000
  log_file /var/log/ceph/ceph-mgr.node1.log
--- end dump of recent events ---

*Romain Raynaud*
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
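If more verbose logs would help, I can raise the mgr debug levels on node1 and restart the daemon. A sketch of the ceph.conf fragment I would add (these are the standard debug options; 20/20 is just a deliberately high level, not a recommendation):

```ini
[mgr]
debug mgr = 20/20
debug monc = 20/20
debug ms = 1/5
```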
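One detail that may help whoever looks at this: almost every frame between malloc() and the signal handler in the trace above resolves to tcmalloc (gperftools) or libunwind (_ULarm_step), so the crash seems to happen while the allocator unwinds the stack, not in ceph-mgr's own code. A throwaway snippet I used to eyeball this (nothing Ceph-specific; the frames are pasted inline from the trace):

```shell
# Classify each backtrace frame by the library its symbol points to.
# This is just text matching on the symbol names, not symbol resolution.
cat <<'EOF' > /tmp/mgr_frames.txt
1: (()+0x302eac) [0x7d6eac]
2: (()+0x25750) [0xb6862750]
3: (_ULarm_step()+0x5b) [0xb67eecec]
4: (()+0x255e8) [0xb6cb05e8]
5: (GetStackTrace(void**, int, int)+0x25) [0xb6cb0a3e]
6: (tcmalloc::PageHeap::GrowHeap(unsigned int)+0xb9) [0xb6ca536a]
7: (tcmalloc::PageHeap::New(unsigned int)+0x79) [0xb6ca55e6]
8: (tcmalloc::CentralFreeList::Populate()+0x71) [0xb6ca45ce]
9: (tcmalloc::CentralFreeList::FetchFromOneSpansSafe(int, void**, void**)+0x1b) [0xb6ca4760]
10: (tcmalloc::CentralFreeList::RemoveRange(void**, void**, int)+0x6d) [0xb6ca47de]
11: (tcmalloc::ThreadCache::FetchFromCentralCache(unsigned int, unsigned int)+0x51) [0xb6ca6a56]
12: (malloc()+0x22d) [0xb6cb1a8e]
EOF
awk '{
  if ($0 ~ /tcmalloc|GetStackTrace|malloc/) lib = "tcmalloc"
  else if ($0 ~ /_UL/)                      lib = "libunwind"
  else                                      lib = "?"   # anonymous frame
  print lib "\t" $0
}' /tmp/mgr_frames.txt
```

Eight of the twelve frames land in tcmalloc and one in libunwind; only the anonymous frames are unresolved.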