The log output you pasted suggests that the OOM killer is responsible
for the failure; can you confirm that? Are there other services on
that node that are using too much RAM?
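To check, you could grep the kernel log for the OOM kill record and
look at which processes currently hold the most memory. A rough sketch
using the usual CentOS 7 tools (adjust paths to your setup):

  # Did the kernel actually kill ceph-mon, and when?
  grep -iE 'out of memory|killed process' /var/log/messages
  dmesg -T | grep -i 'killed process'

  # Which processes are using the most RAM right now?
  ps aux --sort=-rss | head -n 10
  free -h

If ceph-mon shows up as the killed process, watching its RSS while
systemd retries the start (e.g. watch -n 5 'ps -o pid,rss,cmd -C
ceph-mon') should show whether the mon itself grows until it is killed.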
Quoting Iban Cabrillo <cabrillo@xxxxxxxxxxxxxx>:
Hi Guys,
I am in the upgrade process from Mimic to Nautilus.
The first step was to upgrade one cephmon, but after that this
cephmon cannot rejoin the cluster. I see this in the logs:
2022-06-29 15:54:48.200 7fd3d015f1c0 0 ceph version 14.2.22
(ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable),
process ceph-mon, pid 6121
2022-06-29 15:54:48.206 7fd3d015f1c0 0 pidfile_write: ignore empty --pid-file
2022-06-29 15:54:48.339 7fd3d015f1c0 0 load: jerasure load: lrc load: isa
This machine runs both the mon and the mgr, and the mgr daemon is
working fine after the upgrade.
In the system log messages:
Jun 29 15:54:38 cephmon03 systemd: ceph-mon@cephmon03.service failed.
Jun 29 15:54:47 cephmon03 systemd: ceph-mon@cephmon03.service
holdoff time over, scheduling restart.
Jun 29 15:54:47 cephmon03 systemd: Stopped Ceph cluster monitor daemon.
Jun 29 15:54:47 cephmon03 systemd: Started Ceph cluster monitor daemon.
Jun 29 15:56:43 cephmon03 kernel: pickup invoked oom-killer:
gfp_mask=0x201da, order=0, oom_score_adj=0
Jun 29 15:56:43 cephmon03 kernel: pickup cpuset=/ mems_allowed=0
Jun 29 15:56:43 cephmon03 kernel: CPU: 1 PID: 1047 Comm: pickup Not
tainted 3.10.0-957.5.1.el7.x86_64 #1
Jun 29 15:56:43 cephmon03 kernel: Call Trace:
Jun 29 15:56:43 cephmon03 kernel: [<ffffffff81761e41>] dump_stack+0x19/0x1b
..........
Any advice?
--
=====================================================
Ibán Cabrillo Bartolomé
Instituto de Fisica de Cantabria (IFCA-CSIC)
Santander, Spain
Tel: +34942200969/+34669930421
Responsable del Servicio de Computación Avanzada
======================================================
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx