I have tried lowering the pool size/min_size from 3/2 to 3/1 to be able to restart OSDs.
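(For reference, this is roughly the command I used; the pool name here is just a placeholder for our RBD pool:)

  # size stays at 3; min_size goes from 2 to 1 so PGs can keep serving I/O with a single replica
  ceph osd pool set <rbd-pool> min_size 1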
I have restarted 2 OSDs on the same host; they start but kill themselves after a few seconds:
ceph.log:
2022-05-26T17:28:27.482801+0200 mon.0 (mon.1) 347271 : cluster [INF] osd.2 [v2:192.168.133.102:6808/3393141,v1:192.168.133.102:6809/3393141] boot
2022-05-26T17:28:48.698611+0200 mon.0 (mon.1) 347279 : cluster [DBG] osd.2 reported failed by osd.9
2022-05-26T17:28:50.179518+0200 mon.0 (mon.1) 347280 : cluster [DBG] osd.2 reported failed by osd.1
2022-05-26T17:28:50.818647+0200 mon.0 (mon.1) 347281 : cluster [DBG] osd.2 reported failed by osd.10
2022-05-26T17:28:51.458805+0200 mon.0 (mon.1) 347282 : cluster [INF] osd.2 failed (root=default,host=proxmox2) (2 reporters from different host after 21.279366 >= grace 20.853243)
2022-05-26T17:28:51.713265+0200 osd.2 (osd.2) 7 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running
2022-05-26T17:28:51.713275+0200 osd.2 (osd.2) 8 : cluster [DBG] map e5020 wrongly marked me down at e5020
2022-05-26T17:28:51.713612+0200 mon.0 (mon.1) 347285 : cluster [INF] osd.2 marked itself dead as of e5020
ceph-osd.2.log:
2022-05-26T17:29:46.530+0200 7fedeaead700 0 log_channel(cluster) log [WRN] : Monitor daemon marked osd.2 down, but it is still running
2022-05-26T17:29:46.530+0200 7fedeaead700 0 log_channel(cluster) log [DBG] : map e5027 wrongly marked me down at e5026
2022-05-26T17:29:46.530+0200 7fedeaead700 -1 osd.2 5027 _committed_osd_maps marked down 6 > osd_max_markdown_count 5 in last 600.000000 seconds, shutting down
2022-05-26T17:29:46.530+0200 7fedeaead700 1 osd.2 5027 start_waiting_for_healthy
2022-05-26T17:29:46.530+0200 7fedda635700 1 osd.2 pg_epoch: 5026 pg[1.1e6( v 5000'5925155 (4856'5922715,5000'5925155] local-lis/les=5024/5025 n=1093 ec=20/20 lis/c=5024/4272 les/c/f=5025/4273/0 sis=5026 pruub=13.240456586s) [9] r=-1 lpr=5026 pi=[4272,5026)/2 luod=0'0 crt=5000'5925155 mlcod 0'0 active pruub 180.635366961s@ mbc={}] start_peering_interval up [9,2] -> [9], acting [9,2] -> [9], acting_primary 9 -> 9, up_primary 9 -> 9, role 1 -> -1, features acting 4540138292840890367 upacting 4540138292840890367
2022-05-26T17:29:46.530+0200 7feddae36700 1 osd.2 pg_epoch: 5026 pg[1.dc( v 4872'9962220 (4856'9959653,4872'9962220] local-lis/les=5024/5025 n=1069 ec=20/20 lis/c=5024/3469 les/c/f=5025/3471/0 sis=5026 pruub=13.240585004s) [9] r=-1 lpr=5026 pi=[3469,5026)/2 crt=4872'9962220 lcod 0'0 mlcod 0'0 active pruub 180.635491411s@ mbc={}] start_peering_interval up [2,9] -> [9], acting [2,9] -> [9], acting_primary 2 -> 9, up_primary 2 -> 9, role 0 -> -1, features acting 4540138292840890367 upacting 4540138292840890367
[...]
2022-05-26T17:29:46.530+0200 7fedd8e32700 1 osd.2 pg_epoch: 5027 pg[1.68( v 4864'4471702 (4851'4469107,4864'4471702] local-lis/les=5024/5025 n=1072 ec=20/20 lis/c=5024/3816 les/c/f=5025/3817/0 sis=5026 pruub=13.239652864s) [11] r=-1 lpr=5026 pi=[3816,5026)/2 crt=4864'4471702 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.635004919s@ mbc={}] state<Start>: transitioning to Stray
2022-05-26T17:29:46.530+0200 7fedf2e32700 -1 received signal: Interrupt from Kernel ( Could be generated by pthread_kill(), raise(), abort(), alarm() ) UID: 0
2022-05-26T17:29:46.530+0200 7fedf2e32700 -1 osd.2 5027 *** Got signal Interrupt ***
2022-05-26T17:29:46.530+0200 7fedf2e32700 -1 osd.2 5027 *** Immediate shutdown (osd_fast_shutdown=true) ***
2022-05-26T17:29:46.530+0200 7feddae36700 1 osd.2 pg_epoch: 5026 pg[1.c3( v 5022'6643639 (4856'6640985,5022'6643639] local-lis/les=5024/5025 n=1174 ec=20/20 lis/c=5024/3911 les/c/f=5025/3912/0 sis=5026 pruub=13.247752712s) [11] r=-1 lpr=5026 pi=[3911,5026)/2 luod=0'0 crt=5022'6643639 lcod 5015'6643638 mlcod 0'0 active pruub 180.643263386s@ mbc={}] start_peering_interval up [11,2] -> [11], acting [11,2] -> [11], acting_primary 11 -> 11, up_primary 11 -> 11, role 1 -> -1, features acting 4540138292840890367 upacting 4540138292840890367
2022-05-26T17:29:46.530+0200 7fedd9e34700 1 osd.2 pg_epoch: 5026 pg[1.1f6( v 5000'4392558 (4856'4390258,5000'4392558] local-lis/les=5024/5025 n=1093 ec=20/20 lis/c=5024/3427 les/c/f=5025/3428/0 sis=5026 pruub=13.242960713s) [8] r=-1 lpr=5026 pi=[3427,5026)/2 luod=0'0 crt=5000'4392558 mlcod 0'0 active pruub 180.638506428s@ mbc={}] start_peering_interval up [8,2] -> [8], acting [8,2] -> [8], acting_primary 8 -> 8, up_primary 8 -> 8, role 1 -> -1, features acting 4540138292840890367 upacting 4540138292840890367
2022-05-26T17:29:46.530+0200 7fedda635700 1 osd.2 pg_epoch: 5027 pg[1.191( v 4872'3996007 (4856'3993771,4872'3996007] local-lis/les=5024/5025 n=1092 ec=20/20 lis/c=5024/4304 les/c/f=5025/4305/0 sis=5026 pruub=13.244835096s) [10] r=-1 lpr=5026 pi=[4304,5026)/2 crt=4872'3996007 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.640266809s@ mbc={}] state<Start>: transitioning to Stray
2022-05-26T17:29:46.530+0200 7fedd9e34700 1 osd.2 pg_epoch: 5027 pg[1.1f6( v 5000'4392558 (4856'4390258,5000'4392558] local-lis/les=5024/5025 n=1093 ec=20/20 lis/c=5024/3427 les/c/f=5025/3428/0 sis=5026 pruub=13.242886788s) [8] r=-1 lpr=5026 pi=[3427,5026)/2 crt=5000'4392558 mlcod 0'0 unknown NOTIFY pruub 180.638506428s@ mbc={}] state<Start>: transitioning to Stray
2022-05-26T17:29:46.530+0200 7feddae36700 1 osd.2 pg_epoch: 5027 pg[1.c3( v 5022'6643639 (4856'6640985,5022'6643639] local-lis/les=5024/5025 n=1174 ec=20/20 lis/c=5024/3911 les/c/f=5025/3912/0 sis=5026 pruub=13.247597920s) [11] r=-1 lpr=5026 pi=[3911,5026)/2 crt=5022'6643639 lcod 5015'6643638 mlcod 0'0 unknown NOTIFY pruub 180.643263386s@ mbc={}] state<Start>: transitioning to Stray
<EOF>
It seems the other OSDs can't connect to that OSD after the restart...
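If someone thinks it is safe, my next idea is to keep osd.2 from shutting itself down while I debug the connectivity. Just a sketch, I haven't run it yet:

  # stop the mons from flapping OSDs down/out while debugging
  ceph osd set nodown
  ceph osd set noout
  # and/or let osd.2 tolerate more mark-downs than the default of 5 before it exits
  ceph config set osd.2 osd_max_markdown_count 20
  # (remember to 'ceph osd unset nodown' / 'ceph osd unset noout' afterwards)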
On 26/5/22 at 16:54, Eneko Lacunza wrote:
It is a v15.2.15 cluster (Octopus).
Sadly I can't try to reboot the mons and OSDs right now (I don't have enough redundancy).
I have seen that OSDs also complain:
2022-05-26T16:10:29.859+0200 7f4e2dafa700 0 auth: could not find secret_id=44221
2022-05-26T16:10:29.859+0200 7f4e2dafa700 0 cephx: verify_authorizer could not get service secret for service osd secret_id=44221
But one of them hasn't complained for the last 4 hours... no idea why. All OSDs complain about the same secret_id.
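In case this comes from rotating cephx keys, I'm also going to double-check that the clocks on the three nodes are in sync (just a guess on my part):

  ceph time-sync-status   # mon view of clock skew
  chronyc tracking        # NTP state on each node (assuming chrony is the time daemon)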
Thanks
On 26/5/22 at 16:43, Nico Schottelius wrote:
Is this a mimic/nautilus cluster? I think I remember a similar issue about 3-4 years ago with mimic (or even luminous?) at the time.
AFAIR, we were required to reboot all mgrs, mons and finally all OSDs until things started to stabilise.
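From memory that was roughly the following, one host at a time, adjusting unit names to your setup (packaged install, not cephadm):

  systemctl restart ceph-mon@$(hostname)
  systemctl restart ceph-mgr@$(hostname)
  systemctl restart ceph-osd@<id>   # one OSD at a time, waiting for the cluster to settle in between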
Best regards,
Nico
Eneko Lacunza <elacunza@xxxxxxxxx> writes:
Thanks, yes I have stopped the active mgr and let the standby take over, at least twice, but no change.
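(By "failover" I mean roughly the following; proxmox3 is the active mgr per the "ceph -s" output below:)

  ceph mgr fail proxmox3                      # standby proxmox2 should take over
  # or: systemctl restart ceph-mgr@proxmox3   # on that node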
On 26/5/22 at 16:08, Eugen Block wrote:
First thing I would try is a mgr failover.
Quoting Eneko Lacunza <elacunza@xxxxxxxxx>:
Hi all,
I'm trying to diagnose an issue in a tiny cluster that is showing the following status:
root@proxmox3:~# ceph -s
  cluster:
    id:     80d78bb2-6be6-4dff-b41d-60d52e650016
    health: HEALTH_WARN
            1/3 mons down, quorum 0,proxmox3
            Reduced data availability: 513 pgs inactive

  services:
    mon: 3 daemons, quorum 0,proxmox3 (age 3h), out of quorum: 1
    mgr: proxmox3(active, since 16m), standbys: proxmox2
    osd: 12 osds: 8 up (since 3h), 8 in (since 3h)

  task status:

  data:
    pools:   2 pools, 513 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             513 unknown
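For completeness, the obvious next checks I have in mind to narrow down which daemons are affected:

  ceph health detail   # which mon is out of quorum, which PGs are inactive
  ceph osd tree        # which OSDs the cluster considers down, and on which host
  ceph versions        # confirm all daemons run the same release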
Cluster has 3 nodes, each with 4 OSDs. One of the nodes was offline
for 3 weeks, and when bringing it back online VMs stalled on disk
I/O.
The node has been shut down again and we're trying to understand the current status; then we will try to diagnose the issue with the troubled node.
Currently VMs are working and can read RBD volumes, but there seems
to be some kind of mgr issue (?) with stats.
There is no firewall on the nodes nor between the 3 nodes (they are all on the same switch). Ping works on both the Ceph public and private networks.
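Ping only proves ICMP, so I also want to check TCP reachability and MTU to the OSD ports from the other nodes; a quick sketch (IPs/ports taken from the netstat output below, the MTU test only applies if jumbo frames are in use):

  nc -zv 192.168.133.102 6800            # OSD messenger port on the public network
  nc -zv 192.168.134.102 6800            # same check on the cluster network
  ping -M do -s 8972 192.168.134.102     # path MTU check, assuming a 9000-byte MTU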
The MGR log shows this continuously:
2022-05-26T13:49:45.603+0200 7fb78ba3f700 0 auth: could not find secret_id=1892
2022-05-26T13:49:45.603+0200 7fb78ba3f700 0 cephx: verify_authorizer could not get service secret for service mgr secret_id=1892
2022-05-26T13:49:45.983+0200 7fb77a18d700 1 mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
2022-05-26T13:49:47.983+0200 7fb77a18d700 1 mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
2022-05-26T13:49:49.983+0200 7fb77a18d700 1 mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
2022-05-26T13:49:51.983+0200 7fb77a18d700 1 mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
2022-05-26T13:49:51.983+0200 7fb77a18d700 0 log_channel(cluster) log [DBG] : pgmap v3: 513 pgs: 513 unknown; 0 B data, 0 B used, 0 B / 0 B avail
2022-05-26T13:49:53.983+0200 7fb77a18d700 0 log_channel(cluster) log [DBG] : pgmap v4: 513 pgs: 513 unknown; 0 B data, 0 B used, 0 B / 0 B avail
2022-05-26T13:49:55.983+0200 7fb77a18d700 0 log_channel(cluster) log [DBG] : pgmap v5: 513 pgs: 513 unknown; 0 B data, 0 B used, 0 B / 0 B avail
2022-05-26T13:49:57.987+0200 7fb77a18d700 0 log_channel(cluster) log [DBG] : pgmap v6: 513 pgs: 513 unknown; 0 B data, 0 B used, 0 B / 0 B avail
2022-05-26T13:49:58.403+0200 7fb78ba3f700 0 auth: could not find secret_id=1892
2022-05-26T13:49:58.403+0200 7fb78ba3f700 0 cephx: verify_authorizer could not get service secret for service mgr secret_id=1892
So it seems that the mgr is unable to contact the OSDs for stats, and then reports bad info to the mon.
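To tell a network problem apart from a cephx one, I'm thinking of probing one OSD both over the network and through its local admin socket (just a debugging sketch):

  ceph tell osd.0 version    # goes over the network and through cephx
  ceph daemon osd.0 status   # local admin socket, run on the host where osd.0 lives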
I see the following OSD ports open:
tcp 0 0 192.168.134.102:6800 0.0.0.0:* LISTEN 2268/ceph-osd
tcp 0 0 192.168.133.102:6800 0.0.0.0:* LISTEN 2268/ceph-osd
tcp 0 0 192.168.134.102:6801 0.0.0.0:* LISTEN 2268/ceph-osd
tcp 0 0 192.168.133.102:6801 0.0.0.0:* LISTEN 2268/ceph-osd
tcp 0 0 192.168.134.102:6802 0.0.0.0:* LISTEN 2268/ceph-osd
tcp 0 0 192.168.133.102:6802 0.0.0.0:* LISTEN 2268/ceph-osd
tcp 0 0 192.168.134.102:6803 0.0.0.0:* LISTEN 2268/ceph-osd
tcp 0 0 192.168.133.102:6803 0.0.0.0:* LISTEN 2268/ceph-osd
tcp 0 0 192.168.134.102:6804 0.0.0.0:* LISTEN 2271/ceph-osd
tcp 0 0 192.168.133.102:6804 0.0.0.0:* LISTEN 2271/ceph-osd
tcp 0 0 192.168.134.102:6805 0.0.0.0:* LISTEN 2271/ceph-osd
tcp 0 0 192.168.133.102:6805 0.0.0.0:* LISTEN 2271/ceph-osd
tcp 0 0 192.168.134.102:6806 0.0.0.0:* LISTEN 2271/ceph-osd
tcp 0 0 192.168.133.102:6806 0.0.0.0:* LISTEN 2271/ceph-osd
tcp 0 0 192.168.134.102:6807 0.0.0.0:* LISTEN 2271/ceph-osd
tcp 0 0 192.168.133.102:6807 0.0.0.0:* LISTEN 2271/ceph-osd
tcp 0 0 192.168.134.102:6808 0.0.0.0:* LISTEN 2267/ceph-osd
tcp 0 0 192.168.133.102:6808 0.0.0.0:* LISTEN 2267/ceph-osd
tcp 0 0 192.168.134.102:6809 0.0.0.0:* LISTEN 2267/ceph-osd
tcp 0 0 192.168.133.102:6809 0.0.0.0:* LISTEN 2267/ceph-osd
tcp 0 0 192.168.134.102:6810 0.0.0.0:* LISTEN 2267/ceph-osd
tcp 0 0 192.168.133.102:6810 0.0.0.0:* LISTEN 2267/ceph-osd
tcp 0 0 192.168.134.102:6811 0.0.0.0:* LISTEN 2267/ceph-osd
tcp 0 0 192.168.133.102:6811 0.0.0.0:* LISTEN 2267/ceph-osd
tcp 0 0 192.168.134.102:6812 0.0.0.0:* LISTEN 2274/ceph-osd
tcp 0 0 192.168.133.102:6812 0.0.0.0:* LISTEN 2274/ceph-osd
tcp 0 0 192.168.134.102:6813 0.0.0.0:* LISTEN 2274/ceph-osd
tcp 0 0 192.168.133.102:6813 0.0.0.0:* LISTEN 2274/ceph-osd
tcp 0 0 192.168.134.102:6814 0.0.0.0:* LISTEN 2274/ceph-osd
tcp 0 0 192.168.133.102:6814 0.0.0.0:* LISTEN 2274/ceph-osd
tcp 0 0 192.168.134.102:6815 0.0.0.0:* LISTEN 2274/ceph-osd
tcp 0 0 192.168.133.102:6815 0.0.0.0:* LISTEN 2274/ceph-osd
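To cross-check, I also want to compare those listening sockets with the addresses the OSDs have registered in the OSD map, e.g.:

  ceph osd dump | grep '^osd\.'   # v1/v2 addresses each up OSD advertises to the cluster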
Any idea what I can check / what's going on?
Thanks
--
Sustainable and modern Infrastructures by ungleich.ch
Eneko Lacunza
Director Técnico | Zuzendari teknikoa
Binovo IT Human Project
Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun
https://www.youtube.com/user/CANALBINOVO/
https://www.linkedin.com/company/37269706/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx