Issue Upgrading to 16.2.7 related to mon_mds_skip_sanity.

Hi,

I had originally asked the question on the pull request here:
https://github.com/ceph/ceph/pull/44131 and was asked to continue the
discussion on this list.

Last night I upgraded my cluster from 16.2.5 (I think) to 16.2.7.
Unfortunately, assuming this was a minor patch release, I failed to read the
upgrade instructions, so I did not set "mon_mds_skip_sanity = true" before
upgrading (I'm not using cephadm). As a result, the monitor on my first
node crashed after the upgrade. I then discovered the recommendation, set the
flag, and the monitor started. Once the monitor was up, I removed the flag
from the config, restarted it, and it crashed again. Thinking that maybe the
upgrade wasn't fully complete, or that I had to upgrade all of my monitors
first, I went ahead and upgraded the rest of the cluster with the flag set
and let it sit overnight.
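
For reference, this is roughly how I applied the flag while each monitor was
down. It's only a sketch based on my non-cephadm layout, so the mon ID and
systemd unit name below may differ on other setups; with a quorum available,
I believe `ceph config set mon mon_mds_skip_sanity true` would accomplish the
same thing centrally.

```
# Added under a monitor section (or [mon] for all monitors) in ceph.conf:
#   [mon]
#   mon_mds_skip_sanity = true
# then restarted that monitor; on my cluster the mon IDs are 0, 1, and 2:
systemctl restart ceph-mon@0
```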

Today I attempted to remove the flag on one of my monitors and discovered
that it still crashes. Here's my `ceph.conf` just in case that's useful:

```ini
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 10.11.0.0/16
filestore_xattr_use_omap = true
fsid = redacted
mon_allow_pool_delete = true
mon_host = 10.10.0.1, 10.10.0.2, 10.10.0.3
mon_pg_warn_min_per_osd = 10
osd_journal_size = 5120
osd_pool_default_min_size = 1
public_network = 10.10.0.0/16

[client]
keyring = /etc/pve/priv/$cluster.$name.keyring

[mds]
keyring = /var/lib/ceph/mds/ceph-$id/keyring

[mds.0]
host = ceph1
mds_standby_for_name = pve

[mds.1]
host = ceph2
mds_standby_for_name = pve

[mds.2]
host = ceph3
mds_standby_for_name = pve

[mds.3]
host = ceph4
mds_standby_for_name = pve

[mds.4]
host = ceph5
mds_standby_for_name = pve

[mon.0]
host = ceph2
mon_mds_skip_sanity = true

[mon.1]
host = ceph3
mon_mds_skip_sanity = true

[mon.2]
host = ceph4
mon_mds_skip_sanity = true
```
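
In case it's relevant, here are a couple of ways to confirm whether the
option is actually active on a given monitor after editing the config and
restarting. This is a sketch assuming the default admin socket of a
non-cephadm install and my mon ID of 0:

```
# Ask the running monitor directly over its admin socket
ceph daemon mon.0 config get mon_mds_skip_sanity

# Or query the value the daemon reports through the config subsystem
ceph config show mon.0 mon_mds_skip_sanity
```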

I've confirmed that my monitors are on 16.2.7:

```
root@ceph2:/var/log/ceph# sudo ceph mon versions
{
    "ceph version 16.2.7 (f9aa029788115b5df5eeee328f584156565ee5b7) pacific (stable)": 3
}
```
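
Since mon_mds_skip_sanity concerns the MDS map, the same sort of check can
presumably be run against the MDS daemons as well (a sketch; I haven't pasted
the output here):

```
# Aggregate version counts for every daemon type, including the MDSs
ceph versions

# Per-MDS metadata; each entry includes the ceph_version that MDS is running
ceph mds metadata
```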

Here's a crash log from one of the monitors:

```
2021-12-23T11:33:16.689-0500 7f08d5146580  0 starting mon.0 rank 0 at
public addrs [v2:10.10.0.1:3300/0,v1:10.10.0.1:6789/0] at bind addrs [v2:
10.10.0.1:3300/0,v1:10.10.0.1:6789/0] mon_data /var/lib/ceph/mon/ceph-0
fsid --redacted--
2021-12-23T11:33:16.689-0500 7f08d5146580  1 mon.0@-1(???) e14 preinit fsid
--redacted--
2021-12-23T11:33:16.689-0500 7f08d5146580  0 mon.0@-1(???).mds e3719 new map
2021-12-23T11:33:16.689-0500 7f08d5146580  0 mon.0@-1(???).mds e3719
print_map
e3719
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client
writeable ranges,3=default file layouts on dirs,4=dir inode in separate
object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses
inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 4

Filesystem 'clusterfs-capacity' (4)
fs_name clusterfs-capacity
epoch 3719
flags 12
created 2021-04-25T20:22:33.434467-0400
modified 2021-12-23T04:00:59.962394-0500
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
required_client_features {}
last_failure 0
last_failure_osd_epoch 265418
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable
ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds
uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline
data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds 1
in 0
up {0=448184627}
failed
damaged
stopped
data_pools [18]
metadata_pool 19
inline_data disabled
balancer
standby_count_wanted 1
[mds.ceph2{0:448184627} state up:active seq 5103 addr [v2:
10.10.0.1:6800/4098373857,v1:10.10.0.1:6801/4098373857] compat
{c=[1],r=[1],i=[7ff]}]


Filesystem 'clusterfs-performance' (5)
fs_name clusterfs-performance
epoch 3710
flags 12
created 2021-09-07T19:50:05.174359-0400
modified 2021-12-22T21:40:43.813907-0500
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
required_client_features {}
last_failure 0
last_failure_osd_epoch 261954
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable
ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds
uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline
data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds 1
in 0
up {0=448071169}
failed
damaged
stopped
data_pools [28]
metadata_pool 27
inline_data disabled
balancer
standby_count_wanted 1
[mds.ceph4{0:448071169} state up:active seq 2852 addr [v2:
10.10.0.3:6800/368534096,v1:10.10.0.3:6801/368534096] compat
{c=[1],r=[1],i=[77f]}]


Standby daemons:

[mds.ceph3{-1:448024430} state up:standby seq 1 addr [v2:
10.10.0.2:6800/535990846,v1:10.10.0.2:6801/535990846] compat
{c=[1],r=[1],i=[77f]}]
[mds.ceph5{-1:448169888} state up:standby seq 1 addr [v2:
10.10.0.20:6800/1349962329,v1:10.10.0.20:6801/1349962329] compat
{c=[1],r=[1],i=[77f]}]
[mds.ceph1{-1:448321175} state up:standby seq 1 addr [v2:
10.10.0.32:6800/881362738,v1:10.10.0.32:6801/881362738] compat
{c=[1],r=[1],i=[7ff]}]

2021-12-23T11:33:16.693-0500 7f08d5146580 -1 ./src/mds/FSMap.cc: In
function 'void FSMap::sanity(bool) const' thread 7f08d5146580 time
2021-12-23T11:33:16.692037-0500
./src/mds/FSMap.cc: 868: FAILED
ceph_assert(info.compat.writeable(fs->mds_map.compat))

 ceph version 16.2.7 (f9aa029788115b5df5eeee328f584156565ee5b7) pacific
(stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char
const*)+0x124) [0x7f08d6022046]
 2: /usr/lib/ceph/libceph-common.so.2(+0x2511d1) [0x7f08d60221d1]
 3: (FSMap::sanity(bool) const+0x4b6) [0x7f08d65823b6]
 4: (MDSMonitor::update_from_paxos(bool*)+0x4aa) [0x55857ba2da6a]
 5: (Monitor::refresh_from_paxos(bool*)+0x163) [0x55857b7f8f03]
 6: (Monitor::preinit()+0x9af) [0x55857b82510f]
 7: main()
 8: __libc_start_main()
 9: _start()

2021-12-23T11:33:16.697-0500 7f08d5146580 -1 *** Caught signal (Aborted) **
 in thread 7f08d5146580 thread_name:ceph-mon

 ceph version 16.2.7 (f9aa029788115b5df5eeee328f584156565ee5b7) pacific
(stable)
 1: /lib/x86_64-linux-gnu/libpthread.so.0(+0x14140) [0x7f08d5b02140]
 2: gsignal()
 3: abort()
 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char
const*)+0x16e) [0x7f08d6022090]
 5: /usr/lib/ceph/libceph-common.so.2(+0x2511d1) [0x7f08d60221d1]
 6: (FSMap::sanity(bool) const+0x4b6) [0x7f08d65823b6]
 7: (MDSMonitor::update_from_paxos(bool*)+0x4aa) [0x55857ba2da6a]
 8: (Monitor::refresh_from_paxos(bool*)+0x163) [0x55857b7f8f03]
 9: (Monitor::preinit()+0x9af) [0x55857b82510f]
 10: main()
 11: __libc_start_main()
 12: _start()
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed
to interpret this.

--- begin dump of recent events ---
   -82> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command assert hook 0x55857d76e610
   -81> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command abort hook 0x55857d76e610
   -80> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command leak_some_memory hook 0x55857d76e610
   -79> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command perfcounters_dump hook 0x55857d76e610
   -78> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command 1 hook 0x55857d76e610
   -77> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command perf dump hook 0x55857d76e610
   -76> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command perfcounters_schema hook 0x55857d76e610
   -75> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command perf histogram dump hook 0x55857d76e610
   -74> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command 2 hook 0x55857d76e610
   -73> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command perf schema hook 0x55857d76e610
   -72> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command perf histogram schema hook 0x55857d76e610
   -71> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command perf reset hook 0x55857d76e610
   -70> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command config show hook 0x55857d76e610
   -69> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command config help hook 0x55857d76e610
   -68> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command config set hook 0x55857d76e610
   -67> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command config unset hook 0x55857d76e610
   -66> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command config get hook 0x55857d76e610
   -65> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command config diff hook 0x55857d76e610
   -64> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command config diff get hook 0x55857d76e610
   -63> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command injectargs hook 0x55857d76e610
   -62> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command log flush hook 0x55857d76e610
   -61> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command log dump hook 0x55857d76e610
   -60> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command log reopen hook 0x55857d76e610
   -59> 2021-12-23T11:33:16.545-0500 7f08d5146580  5 asok(0x55857d820000)
register_command dump_mempools hook 0x55857e418068
   -58> 2021-12-23T11:33:16.553-0500 7f08d5146580  0 set uid:gid to
64045:64045 (ceph:ceph)
   -57> 2021-12-23T11:33:16.553-0500 7f08d5146580  0 ceph version 16.2.7
(f9aa029788115b5df5eeee328f584156565ee5b7) pacific (stable), process
ceph-mon, pid 704498
   -56> 2021-12-23T11:33:16.553-0500 7f08d5146580  0 pidfile_write: ignore
empty --pid-file
   -55> 2021-12-23T11:33:16.553-0500 7f08d5146580  5 asok(0x55857d820000)
init /var/run/ceph/ceph-mon.0.asok
   -54> 2021-12-23T11:33:16.553-0500 7f08d5146580  5 asok(0x55857d820000)
bind_and_listen /var/run/ceph/ceph-mon.0.asok
   -53> 2021-12-23T11:33:16.553-0500 7f08d5146580  5 asok(0x55857d820000)
register_command 0 hook 0x55857d76dc80
   -52> 2021-12-23T11:33:16.553-0500 7f08d5146580  5 asok(0x55857d820000)
register_command version hook 0x55857d76dc80
   -51> 2021-12-23T11:33:16.553-0500 7f08d5146580  5 asok(0x55857d820000)
register_command git_version hook 0x55857d76dc80
   -50> 2021-12-23T11:33:16.553-0500 7f08d5146580  5 asok(0x55857d820000)
register_command help hook 0x55857d76e520
   -49> 2021-12-23T11:33:16.553-0500 7f08d5146580  5 asok(0x55857d820000)
register_command get_command_descriptions hook 0x55857d76e500
   -48> 2021-12-23T11:33:16.553-0500 7f08d3f26700  5 asok(0x55857d820000)
entry start
   -47> 2021-12-23T11:33:16.573-0500 7f08d5146580  0 load: jerasure load:
lrc load: isa
   -46> 2021-12-23T11:33:16.625-0500 7f08d5146580  1 leveldb: Recovering
log #19956085
   -45> 2021-12-23T11:33:16.629-0500 7f08d5146580  1 leveldb: Level-0 table
#19956112: started
   -44> 2021-12-23T11:33:16.669-0500 7f08d5146580  1 leveldb: Level-0 table
#19956112: 2636812 bytes OK
   -43> 2021-12-23T11:33:16.685-0500 7f08d5146580  1 leveldb: Delete type=0
#19956085

   -42> 2021-12-23T11:33:16.685-0500 7f08d5146580  1 leveldb: Delete type=3
#19934065

   -41> 2021-12-23T11:33:16.685-0500 7f08d5146580  5
AuthRegistry(0x55857ea22140) adding auth protocol: cephx
   -40> 2021-12-23T11:33:16.685-0500 7f08d5146580  5
AuthRegistry(0x55857ea22140) adding auth protocol: cephx
   -39> 2021-12-23T11:33:16.685-0500 7f08d5146580  5
AuthRegistry(0x55857ea22140) adding auth protocol: cephx
   -38> 2021-12-23T11:33:16.685-0500 7f08d5146580  5
AuthRegistry(0x55857ea22140) adding con mode: secure
   -37> 2021-12-23T11:33:16.685-0500 7f08d5146580  5
AuthRegistry(0x55857ea22140) adding con mode: crc
   -36> 2021-12-23T11:33:16.685-0500 7f08d5146580  5
AuthRegistry(0x55857ea22140) adding con mode: secure
   -35> 2021-12-23T11:33:16.685-0500 7f08d5146580  5
AuthRegistry(0x55857ea22140) adding con mode: crc
   -34> 2021-12-23T11:33:16.685-0500 7f08d5146580  5
AuthRegistry(0x55857ea22140) adding con mode: secure
   -33> 2021-12-23T11:33:16.685-0500 7f08d5146580  5
AuthRegistry(0x55857ea22140) adding con mode: crc
   -32> 2021-12-23T11:33:16.685-0500 7f08d5146580  5
AuthRegistry(0x55857ea22140) adding con mode: crc
   -31> 2021-12-23T11:33:16.685-0500 7f08d5146580  5
AuthRegistry(0x55857ea22140) adding con mode: secure
   -30> 2021-12-23T11:33:16.685-0500 7f08d5146580  5
AuthRegistry(0x55857ea22140) adding con mode: crc
   -29> 2021-12-23T11:33:16.685-0500 7f08d5146580  5
AuthRegistry(0x55857ea22140) adding con mode: secure
   -28> 2021-12-23T11:33:16.685-0500 7f08d5146580  5
AuthRegistry(0x55857ea22140) adding con mode: crc
   -27> 2021-12-23T11:33:16.685-0500 7f08d5146580  5
AuthRegistry(0x55857ea22140) adding con mode: secure
   -26> 2021-12-23T11:33:16.685-0500 7f08d5146580  2 auth: KeyRing::load:
loaded key file /var/lib/ceph/mon/ceph-0/keyring
   -25> 2021-12-23T11:33:16.689-0500 7f08d5146580  0 starting mon.0 rank 0
at public addrs [v2:10.10.0.1:3300/0,v1:10.10.0.1:6789/0] at bind addrs [v2:
10.10.0.1:3300/0,v1:10.10.0.1:6789/0] mon_data /var/lib/ceph/mon/ceph-0
fsid --redacted--
   -24> 2021-12-23T11:33:16.689-0500 7f08d5146580  5
AuthRegistry(0x55857ea22a40) adding auth protocol: cephx
   -23> 2021-12-23T11:33:16.689-0500 7f08d5146580  5
AuthRegistry(0x55857ea22a40) adding auth protocol: cephx
   -22> 2021-12-23T11:33:16.689-0500 7f08d5146580  5
AuthRegistry(0x55857ea22a40) adding auth protocol: cephx
   -21> 2021-12-23T11:33:16.689-0500 7f08d5146580  5
AuthRegistry(0x55857ea22a40) adding con mode: secure
   -20> 2021-12-23T11:33:16.689-0500 7f08d5146580  5
AuthRegistry(0x55857ea22a40) adding con mode: crc
   -19> 2021-12-23T11:33:16.689-0500 7f08d5146580  5
AuthRegistry(0x55857ea22a40) adding con mode: secure
   -18> 2021-12-23T11:33:16.689-0500 7f08d5146580  5
AuthRegistry(0x55857ea22a40) adding con mode: crc
   -17> 2021-12-23T11:33:16.689-0500 7f08d5146580  5
AuthRegistry(0x55857ea22a40) adding con mode: secure
   -16> 2021-12-23T11:33:16.689-0500 7f08d5146580  5
AuthRegistry(0x55857ea22a40) adding con mode: crc
   -15> 2021-12-23T11:33:16.689-0500 7f08d5146580  5
AuthRegistry(0x55857ea22a40) adding con mode: crc
   -14> 2021-12-23T11:33:16.689-0500 7f08d5146580  5
AuthRegistry(0x55857ea22a40) adding con mode: secure
   -13> 2021-12-23T11:33:16.689-0500 7f08d5146580  5
AuthRegistry(0x55857ea22a40) adding con mode: crc
   -12> 2021-12-23T11:33:16.689-0500 7f08d5146580  5
AuthRegistry(0x55857ea22a40) adding con mode: secure
   -11> 2021-12-23T11:33:16.689-0500 7f08d5146580  5
AuthRegistry(0x55857ea22a40) adding con mode: crc
   -10> 2021-12-23T11:33:16.689-0500 7f08d5146580  5
AuthRegistry(0x55857ea22a40) adding con mode: secure
    -9> 2021-12-23T11:33:16.689-0500 7f08d5146580  2 auth: KeyRing::load:
loaded key file /var/lib/ceph/mon/ceph-0/keyring
    -8> 2021-12-23T11:33:16.689-0500 7f08d5146580  5 adding auth protocol:
cephx
    -7> 2021-12-23T11:33:16.689-0500 7f08d5146580  5 adding auth protocol:
cephx
    -6> 2021-12-23T11:33:16.689-0500 7f08d5146580 10 log_channel(cluster)
update_config to_monitors: true to_syslog: false syslog_facility: daemon
prio: info to_graylog: false graylog_host: 127.0.0.1 graylog_port: 12201)
    -5> 2021-12-23T11:33:16.689-0500 7f08d5146580 10 log_channel(audit)
update_config to_monitors: true to_syslog: false syslog_facility: local0
prio: info to_graylog: false graylog_host: 127.0.0.1 graylog_port: 12201)
    -4> 2021-12-23T11:33:16.689-0500 7f08d5146580  1 mon.0@-1(???) e14
preinit fsid --redacted--
    -3> 2021-12-23T11:33:16.689-0500 7f08d5146580  0 mon.0@-1(???).mds
e3719 new map
    -2> 2021-12-23T11:33:16.689-0500 7f08d5146580  0 mon.0@-1(???).mds
e3719 print_map
e3719
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client
writeable ranges,3=default file layouts on dirs,4=dir inode in separate
object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses
inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 4

Filesystem 'clusterfs-capacity' (4)
fs_name clusterfs-capacity
epoch 3719
flags 12
created 2021-04-25T20:22:33.434467-0400
modified 2021-12-23T04:00:59.962394-0500
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
required_client_features {}
last_failure 0
last_failure_osd_epoch 265418
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable
ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds
uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline
data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds 1
in 0
up {0=448184627}
failed
damaged
stopped
data_pools [18]
metadata_pool 19
inline_data disabled
balancer
standby_count_wanted 1
[mds.ceph2{0:448184627} state up:active seq 5103 addr [v2:
10.10.0.1:6800/4098373857,v1:10.10.0.1:6801/4098373857] compat
{c=[1],r=[1],i=[7ff]}]


Filesystem 'clusterfs-performance' (5)
fs_name clusterfs-performance
epoch 3710
flags 12
created 2021-09-07T19:50:05.174359-0400
modified 2021-12-22T21:40:43.813907-0500
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
required_client_features {}
last_failure 0
last_failure_osd_epoch 261954
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable
ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds
uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline
data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds 1
in 0
up {0=448071169}
failed
damaged
stopped
data_pools [28]
metadata_pool 27
inline_data disabled
balancer
standby_count_wanted 1
[mds.ceph4{0:448071169} state up:active seq 2852 addr [v2:
10.10.0.3:6800/368534096,v1:10.10.0.3:6801/368534096] compat
{c=[1],r=[1],i=[77f]}]


Standby daemons:

[mds.ceph3{-1:448024430} state up:standby seq 1 addr [v2:
10.10.0.2:6800/535990846,v1:10.10.0.2:6801/535990846] compat
{c=[1],r=[1],i=[77f]}]
[mds.ceph5{-1:448169888} state up:standby seq 1 addr [v2:
10.10.0.20:6800/1349962329,v1:10.10.0.20:6801/1349962329] compat
{c=[1],r=[1],i=[77f]}]
[mds.ceph1{-1:448321175} state up:standby seq 1 addr [v2:
10.10.0.32:6800/881362738,v1:10.10.0.32:6801/881362738] compat
{c=[1],r=[1],i=[7ff]}]

    -1> 2021-12-23T11:33:16.693-0500 7f08d5146580 -1 ./src/mds/FSMap.cc: In
function 'void FSMap::sanity(bool) const' thread 7f08d5146580 time
2021-12-23T11:33:16.692037-0500
./src/mds/FSMap.cc: 868: FAILED
ceph_assert(info.compat.writeable(fs->mds_map.compat))

 ceph version 16.2.7 (f9aa029788115b5df5eeee328f584156565ee5b7) pacific
(stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char
const*)+0x124) [0x7f08d6022046]
 2: /usr/lib/ceph/libceph-common.so.2(+0x2511d1) [0x7f08d60221d1]
 3: (FSMap::sanity(bool) const+0x4b6) [0x7f08d65823b6]
 4: (MDSMonitor::update_from_paxos(bool*)+0x4aa) [0x55857ba2da6a]
 5: (Monitor::refresh_from_paxos(bool*)+0x163) [0x55857b7f8f03]
 6: (Monitor::preinit()+0x9af) [0x55857b82510f]
 7: main()
 8: __libc_start_main()
 9: _start()

     0> 2021-12-23T11:33:16.697-0500 7f08d5146580 -1 *** Caught signal
(Aborted) **
 in thread 7f08d5146580 thread_name:ceph-mon

 ceph version 16.2.7 (f9aa029788115b5df5eeee328f584156565ee5b7) pacific
(stable)
 1: /lib/x86_64-linux-gnu/libpthread.so.0(+0x14140) [0x7f08d5b02140]
 2: gsignal()
 3: abort()
 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char
const*)+0x16e) [0x7f08d6022090]
 5: /usr/lib/ceph/libceph-common.so.2(+0x2511d1) [0x7f08d60221d1]
 6: (FSMap::sanity(bool) const+0x4b6) [0x7f08d65823b6]
 7: (MDSMonitor::update_from_paxos(bool*)+0x4aa) [0x55857ba2da6a]
 8: (Monitor::refresh_from_paxos(bool*)+0x163) [0x55857b7f8f03]
 9: (Monitor::preinit()+0x9af) [0x55857b82510f]
 10: main()
 11: __libc_start_main()
 12: _start()
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed
to interpret this.

--- logging levels ---
   0/ 5 none
   0/ 1 lockdep
   0/ 1 context
   1/ 1 crush
   1/ 5 mds
   1/ 5 mds_balancer
   1/ 5 mds_locker
   1/ 5 mds_log
   1/ 5 mds_log_expire
   1/ 5 mds_migrator
   0/ 1 buffer
   0/ 1 timer
   0/ 1 filer
   0/ 1 striper
   0/ 1 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 rbd_mirror
   0/ 5 rbd_replay
   0/ 5 rbd_pwl
   0/ 5 journaler
   0/ 5 objectcacher
   0/ 5 immutable_obj_cache
   0/ 5 client
   1/ 5 osd
   0/ 5 optracker
   0/ 5 objclass
   1/ 3 filestore
   1/ 3 journal
   0/ 0 ms
   1/ 5 mon
   0/10 monc
   1/ 5 paxos
   0/ 5 tp
   1/ 5 auth
   1/ 5 crypto
   1/ 1 finisher
   1/ 1 reserver
   1/ 5 heartbeatmap
   1/ 5 perfcounter
   1/ 5 rgw
   1/ 5 rgw_sync
   1/10 civetweb
   1/ 5 javaclient
   1/ 5 asok
   1/ 1 throttle
   0/ 0 refs
   1/ 5 compressor
   1/ 5 bluestore
   1/ 5 bluefs
   1/ 3 bdev
   1/ 5 kstore
   4/ 5 rocksdb
   4/ 5 leveldb
   4/ 5 memdb
   1/ 5 fuse
   2/ 5 mgr
   1/ 5 mgrc
   1/ 5 dpdk
   1/ 5 eventtrace
   1/ 5 prioritycache
   0/ 5 test
   0/ 5 cephfs_mirror
   0/ 5 cephsqlite
  -2/-2 (syslog threshold)
  -1/-1 (stderr threshold)
--- pthread ID / name mapping for recent threads ---
  139675892344576 / admin_socket
  139675911349632 / ceph-mon
  max_recent     10000
  max_new        10000
  log_file /var/log/ceph/ceph-mon.0.log
--- end dump of recent events ---
```
Perhaps this is a different sanity check that is legitimately failing and
points at something that actually needs to be fixed in my cluster?
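
If it helps, the map the monitor is choking on, including the per-daemon
compat sets the assertion compares, can also be dumped from a running monitor
while the skip flag is in place. A sketch, assuming a quorum is available:

```
# Dump the current FSMap; the per-MDS "compat {c=[...],r=[...],i=[...]}"
# entries are what FSMap::sanity() checks against each filesystem's
# mds_map.compat
ceph fs dump
```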

Ilya Kogan
w: github.com/ikogan   e:  ikogan@xxxxxxxxxxxxx
  <http://twitter.com/ilkogan>    <https://www.linkedin.com/in/ilyakogan/>