Re: Upgrade 16.2.6 -> 16.2.7 - MON assertion failure

Hi Dan

Here it is....

Thanks, Chris

root@xxxxtstmon01:/var/log/ceph# ceph fs dump

e254
enable_multiple, ever_enabled_multiple: 0,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 2
Filesystem 'cephfs' (2)
fs_name	cephfs
epoch	254
flags	12
created	2021-05-22T15:28:40.855967+0000
modified	2021-12-09T14:56:59.393531+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
required_client_features	{}
last_failure	0
last_failure_osd_epoch	47773
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=85733688}
failed	
damaged	
stopped	
data_pools	[36,40]
metadata_pool	35
inline_data	disabled
balancer	
standby_count_wanted	1
[mds.xxxxtstmon01{0:85733688} state up:active seq 8 addr [v2:xx.xx.140.242:6800/3542422062,v1:xx.xx.140.242:6801/3542422062] compat {c=[1],r=[1],i=[7ff]}]
Standby daemons:
[mds.xxxxtstmon02{-1:69634919} state up:standby seq 1 addr [v2:xx.xx.140.243:6800/3133159648,v1:xx.xx.140.243:6801/3133159648] compat {c=[1],r=[1],i=[1]}]
dumped fsmap epoch 254
root@xxxxtstmon01:/var/log/ceph#
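
In case it helps, below is a rough standalone model of what the failing
assert in my earlier mail (quoted below) appears to compare, namely each
daemon's compat set against the filesystem's mds_map compat. It is only
my own simplification, not Ceph's actual CompatSet code, and mapping the
bracketed values (i=[7ff] on the active MDS, i=[1] on the standby) onto
plain bitmasks is an assumption on my part:

// Toy model of the shape of the check behind the failed assert in
// FSMap::sanity(): info.compat.writeable(fs->mds_map.compat).
// Not Ceph code; the real CompatSet tracks named features, not raw masks.
#include <cstdint>
#include <iostream>

struct CompatModel {
    uint64_t compat = 0;     // optional features
    uint64_t ro_compat = 0;  // unknown bits force read-only access
    uint64_t incompat = 0;   // unknown bits forbid access entirely

    // A daemon can read a map only if it knows every incompat bit
    // recorded in that map.
    bool readable(const CompatModel& map) const {
        return (map.incompat & ~incompat) == 0;
    }
    // Writing additionally requires knowing every ro_compat bit.
    bool writeable(const CompatModel& map) const {
        return readable(map) && (map.ro_compat & ~ro_compat) == 0;
    }
};

int main() {
    CompatModel fs_map;    // stand-in for the mds_map compat in the dump above
    fs_map.incompat = 0x7ff;

    CompatModel active;    // mds.xxxxtstmon01 shows compat i=[7ff]
    active.incompat = 0x7ff;

    CompatModel standby;   // mds.xxxxtstmon02 shows compat i=[1]
    standby.incompat = 0x1;

    // With these masks this prints 1 for the active daemon and 0 for the standby.
    std::cout << "active  writeable: " << active.writeable(fs_map) << "\n";
    std::cout << "standby writeable: " << standby.writeable(fs_map) << "\n";
    return 0;
}

Read that way, the standby comes out as non-writeable against the map,
which would at least be consistent with where update_from_paxos() trips
the assert, though I am only guessing.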



On 09/12/2021 16:25, Dan van der Ster wrote:
Hi,

This is clearly not expected. I pinged cephfs devs on IRC.

Could you please share the output of `ceph fs dump`?

-- dan

On Thu, Dec 9, 2021 at 4:40 PM Chris Palmer <chris.palmer@xxxxxxxxx> wrote:
Hi

I've just started an upgrade of a test cluster from 16.2.6 -> 16.2.7 and
immediately hit a problem.

The cluster started as Octopus and has been upgraded through to 16.2.6
without any trouble. It is a conventional deployment on Debian 10, NOT
using cephadm. All was clean before the upgrade. It contains nodes as
follows:
- Node 1: MON, MGR, MDS, RGW
- Node 2: MON, MGR, MDS, RGW
- Node 3: MON
- Node 4-6: OSDs

In the absence of any specific upgrade instructions for 16.2.7, I
upgraded Node 1 and rebooted. The MON on that host now fails to start,
throwing the following assertion:

2021-12-09T14:56:40.098+00:00 xxxxtstmon01 ceph-mon[960]: /build/ceph-16.2.7/src/mds/FSMap.cc: In function 'void FSMap::sanity(bool) const' thread 7f2d309085c0 time 2021-12-09T14:56:40.098395+0000
2021-12-09T14:56:40.098+00:00 xxxxtstmon01 ceph-mon[960]: /build/ceph-16.2.7/src/mds/FSMap.cc: 868: FAILED ceph_assert(info.compat.writeable(fs->mds_map.compat))
2021-12-09T14:56:40.103+00:00 xxxxtstmon01 ceph-mon[960]:  ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)
2021-12-09T14:56:40.103+00:00 xxxxtstmon01 ceph-mon[960]:  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x14b) [0x7f2d3222423c]
2021-12-09T14:56:40.103+00:00 xxxxtstmon01 ceph-mon[960]:  2: /usr/lib/ceph/libceph-common.so.2(+0x277414) [0x7f2d32224414]
2021-12-09T14:56:40.103+00:00 xxxxtstmon01 ceph-mon[960]:  3: (FSMap::sanity(bool) const+0x2a8) [0x7f2d327331c8]
2021-12-09T14:56:40.103+00:00 xxxxtstmon01 ceph-mon[960]:  4: (MDSMonitor::update_from_paxos(bool*)+0x396) [0x55a32fe6b546]
2021-12-09T14:56:40.103+00:00 xxxxtstmon01 ceph-mon[960]:  5: (PaxosService::refresh(bool*)+0x10a) [0x55a32fd960ca]
2021-12-09T14:56:40.103+00:00 xxxxtstmon01 ceph-mon[960]:  6: (Monitor::refresh_from_paxos(bool*)+0x17c) [0x55a32fc54bec]
2021-12-09T14:56:40.103+00:00 xxxxtstmon01 ceph-mon[960]:  7: (Monitor::init_paxos()+0xfc) [0x55a32fc54e9c]
2021-12-09T14:56:40.103+00:00 xxxxtstmon01 ceph-mon[960]:  8: (Monitor::preinit()+0xbb9) [0x55a32fc7eb09]
2021-12-09T14:56:40.103+00:00 xxxxtstmon01 ceph-mon[960]:  9: main()
2021-12-09T14:56:40.103+00:00 xxxxtstmon01 ceph-mon[960]:  10: __libc_start_main()
2021-12-09T14:56:40.103+00:00 xxxxtstmon01 ceph-mon[960]:  11: _start()

ceph health detail merely shows mon01 down, plus the 5 crashes that occurred before the service stopped auto-restarting.

Any ideas please?

Thanks, Chris
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


