MDS recovery with existing pools

Hi,

following up on the previous thread ("After hardware failure tried to recover ceph and followed instructions for recovery using OSDS"), we were able to get Ceph back into a healthy state (including the unfound object). Now the CephFS needs to be recovered, and I'm having trouble fully understanding from the docs [1] what the next steps should be. We ran the following command, which according to [1] sets the state of rank 0 to "existing but failed":

ceph fs new <fs_name> <metadata_pool> <data_pool> --force --recover
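
For reference, so far I've only been checking the state with read-only commands, e.g.:

ceph fs status <fs_name>
ceph fs dump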

But how do we continue from here? Should we expect an active MDS at this point or not? I'm asking because the "ceph fs status" output still shows rank 0 as failed. We then tried:

ceph fs set <fs_name> joinable true

But apparently it was already joinable; nothing changed. Before doing anything (destructive) from the advanced options [2], I wanted to ask the community how to proceed from here. I pasted the MDS logs at the bottom; I'm not really sure whether the current state is expected or not. Apparently the journal recovers, but the purge_queue does not:

mds.0.41 Booting: 2: waiting for purge queue recovered
mds.0.journaler.pq(ro) _finish_probe_end write_pos = 14797504512 (header had 14789452521). recovered.
mds.0.purge_queue operator(): open complete
mds.0.purge_queue operator(): recovering write_pos
monclient: get_auth_request con 0x55c280bc5c00 auth_method 0
monclient: get_auth_request con 0x55c280ee0c00 auth_method 0
mds.0.journaler.pq(ro) _finish_read got error -2
mds.0.purge_queue _recover: Error -2 recovering write_pos
mds.0.purge_queue _go_readonly: going readonly because internal IO failed: No such file or directory
mds.0.journaler.pq(ro) set_readonly
mds.0.41 unhandled write error (2) No such file or directory, force readonly...
mds.0.cache force file system read-only
force file system read-only
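
If it helps with debugging, I could also inspect the purge queue journal directly. This is what I would run (read-only, with the MDS stopped); I'm assuming cephfs-journal-tool with --journal=purge_queue is the right tool here, please correct me if not:

cephfs-journal-tool --rank=<fs_name>:0 --journal=purge_queue journal inspect
cephfs-journal-tool --rank=<fs_name>:0 --journal=purge_queue header get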

Is this expected because the "--recover" flag prevents an active MDS, or not? Before running "ceph mds rmfailed ..." and/or "ceph fs reset <file system name>" with the --yes-i-really-mean-it flag, I'd like to ask for your input: in which cases should we run those commands? The docs are not really clear to me. Any input is highly appreciated!
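
For clarity, these are the commands from [2] I'm holding off on (assuming rank 0 would be the role argument for rmfailed):

ceph mds rmfailed <fs_name>:0 --yes-i-really-mean-it
ceph fs reset <fs_name> --yes-i-really-mean-it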

Thanks!
Eugen

[1] https://docs.ceph.com/en/latest/cephfs/recover-fs-after-mon-store-loss/
[2] https://docs.ceph.com/en/latest/cephfs/administration/#advanced-cephfs-admin-settings

---snip---
Dec 07 15:35:48 node02 bash[692598]: debug -90> 2023-12-07T13:35:47.730+0000 7f4cd855f700 1 mds.storage.node02.hemalk Updating MDS map to version 41 from mon.0
Dec 07 15:35:48 node02 bash[692598]: debug -89> 2023-12-07T13:35:47.730+0000 7f4cd855f700 4 mds.0.purge_queue operator(): data pool 3 not found in OSDMap
Dec 07 15:35:48 node02 bash[692598]: debug -88> 2023-12-07T13:35:47.730+0000 7f4cd855f700 5 asok(0x55c27fe86000) register_command objecter_requests hook 0x55c27fe16310
Dec 07 15:35:48 node02 bash[692598]: debug -87> 2023-12-07T13:35:47.730+0000 7f4cd855f700 10 monclient: _renew_subs
Dec 07 15:35:48 node02 bash[692598]: debug -86> 2023-12-07T13:35:47.730+0000 7f4cd855f700 10 monclient: _send_mon_message to mon.node02 at v2:10.40.99.12:3300/0
Dec 07 15:35:48 node02 bash[692598]: debug -85> 2023-12-07T13:35:47.730+0000 7f4cd855f700 10 log_channel(cluster) update_config to_monitors: true to_syslog: false syslog_facility: prio: info to_graylog: false graylog_host: 127.0.0.1 graylog_port: 12201)
Dec 07 15:35:48 node02 bash[692598]: debug -84> 2023-12-07T13:35:47.730+0000 7f4cd855f700 4 mds.0.purge_queue operator(): data pool 3 not found in OSDMap
Dec 07 15:35:48 node02 bash[692598]: debug -83> 2023-12-07T13:35:47.730+0000 7f4cd855f700 4 mds.0.0 apply_blocklist: killed 0, blocklisted sessions (0 blocklist entries, 0)
Dec 07 15:35:48 node02 bash[692598]: debug -82> 2023-12-07T13:35:47.730+0000 7f4cd855f700 1 mds.0.41 handle_mds_map i am now mds.0.41
Dec 07 15:35:48 node02 bash[692598]: debug -81> 2023-12-07T13:35:47.734+0000 7f4cd855f700 1 mds.0.41 handle_mds_map state change up:standby --> up:replay
Dec 07 15:35:48 node02 bash[692598]: debug -80> 2023-12-07T13:35:47.734+0000 7f4cd855f700 5 mds.beacon.storage.node02.hemalk set_want_state: up:standby -> up:replay
Dec 07 15:35:48 node02 bash[692598]: debug -79> 2023-12-07T13:35:47.734+0000 7f4cd855f700 1 mds.0.41 replay_start
Dec 07 15:35:48 node02 bash[692598]: debug -78> 2023-12-07T13:35:47.734+0000 7f4cd855f700 2 mds.0.41 Booting: 0: opening inotable
Dec 07 15:35:48 node02 bash[692598]: debug -77> 2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient: _send_mon_message to mon.node02 at v2:10.40.99.12:3300/0
Dec 07 15:35:48 node02 bash[692598]: debug -76> 2023-12-07T13:35:47.734+0000 7f4cd855f700 2 mds.0.41 Booting: 0: opening sessionmap
Dec 07 15:35:48 node02 bash[692598]: debug -75> 2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient: _send_mon_message to mon.node02 at v2:10.40.99.12:3300/0
Dec 07 15:35:48 node02 bash[692598]: debug -74> 2023-12-07T13:35:47.734+0000 7f4cd855f700 2 mds.0.41 Booting: 0: opening mds log
Dec 07 15:35:48 node02 bash[692598]: debug -73> 2023-12-07T13:35:47.734+0000 7f4cd855f700 5 mds.0.log open discovering log bounds
Dec 07 15:35:48 node02 bash[692598]: debug -72> 2023-12-07T13:35:47.734+0000 7f4cd855f700 2 mds.0.41 Booting: 0: opening purge queue (async)
Dec 07 15:35:48 node02 bash[692598]: debug -71> 2023-12-07T13:35:47.734+0000 7f4cd855f700 4 mds.0.purge_queue open: opening
Dec 07 15:35:48 node02 bash[692598]: debug -70> 2023-12-07T13:35:47.734+0000 7f4cd855f700 1 mds.0.journaler.pq(ro) recover start
Dec 07 15:35:48 node02 bash[692598]: debug -69> 2023-12-07T13:35:47.734+0000 7f4cd855f700 1 mds.0.journaler.pq(ro) read_head
Dec 07 15:35:48 node02 bash[692598]: debug -68> 2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient: _send_mon_message to mon.node02 at v2:10.40.99.12:3300/0
Dec 07 15:35:48 node02 bash[692598]: debug -67> 2023-12-07T13:35:47.734+0000 7f4cd855f700 2 mds.0.41 Booting: 0: loading open file table (async)
Dec 07 15:35:48 node02 bash[692598]: debug -66> 2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient: _send_mon_message to mon.node02 at v2:10.40.99.12:3300/0
Dec 07 15:35:48 node02 bash[692598]: debug -65> 2023-12-07T13:35:47.734+0000 7f4cd855f700 2 mds.0.41 Booting: 0: opening snap table
Dec 07 15:35:48 node02 bash[692598]: debug -64> 2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient: _send_mon_message to mon.node02 at v2:10.40.99.12:3300/0
Dec 07 15:35:48 node02 bash[692598]: debug -63> 2023-12-07T13:35:47.734+0000 7f4cd1d52700 4 mds.0.journalpointer Reading journal pointer '400.00000000'
Dec 07 15:35:48 node02 bash[692598]: debug -62> 2023-12-07T13:35:47.734+0000 7f4cd1d52700 10 monclient: _send_mon_message to mon.node02 at v2:10.40.99.12:3300/0
Dec 07 15:35:48 node02 bash[692598]: debug -61> 2023-12-07T13:35:47.734+0000 7f4cd4557700 2 mds.0.cache Memory usage: total 316452, rss 43088, heap 198940, baseline 198940, 0 / 0 inodes have caps, 0 caps, 0 caps per inode
Dec 07 15:35:48 node02 bash[692598]: debug -60> 2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient: _renew_subs
Dec 07 15:35:48 node02 bash[692598]: debug -59> 2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient: _send_mon_message to mon.node02 at v2:10.40.99.12:3300/0
Dec 07 15:35:48 node02 bash[692598]: debug -58> 2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient: handle_get_version_reply finishing 1 version 10835
Dec 07 15:35:48 node02 bash[692598]: debug -57> 2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient: handle_get_version_reply finishing 2 version 10835
Dec 07 15:35:48 node02 bash[692598]: debug -56> 2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient: handle_get_version_reply finishing 3 version 10835
Dec 07 15:35:48 node02 bash[692598]: debug -55> 2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient: handle_get_version_reply finishing 4 version 10835
Dec 07 15:35:48 node02 bash[692598]: debug -54> 2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient: handle_get_version_reply finishing 5 version 10835
Dec 07 15:35:48 node02 bash[692598]: debug -53> 2023-12-07T13:35:47.734+0000 7f4cd855f700 10 monclient: handle_get_version_reply finishing 6 version 10835
Dec 07 15:35:48 node02 bash[692598]: debug -52> 2023-12-07T13:35:47.734+0000 7f4cdb565700 10 monclient: get_auth_request con 0x55c280bc5800 auth_method 0
Dec 07 15:35:48 node02 bash[692598]: debug -51> 2023-12-07T13:35:47.734+0000 7f4cdbd66700 10 monclient: get_auth_request con 0x55c280dc6800 auth_method 0
Dec 07 15:35:48 node02 bash[692598]: debug -50> 2023-12-07T13:35:47.734+0000 7f4cdad64700 10 monclient: get_auth_request con 0x55c280dc7800 auth_method 0
Dec 07 15:35:48 node02 bash[692598]: debug -49> 2023-12-07T13:35:47.734+0000 7f4cd3555700 1 mds.0.journaler.pq(ro) _finish_read_head loghead(trim 14789115904, expire 14789452521, write 14789452521, stream_format 1). probing for end of log (from 14789452521)...
Dec 07 15:35:48 node02 bash[692598]: debug -48> 2023-12-07T13:35:47.734+0000 7f4cd3555700 1 mds.0.journaler.pq(ro) probing for end of the log
Dec 07 15:35:48 node02 bash[692598]: debug -47> 2023-12-07T13:35:47.738+0000 7f4cd1d52700 1 mds.0.journaler.mdlog(ro) recover start
Dec 07 15:35:48 node02 bash[692598]: debug -46> 2023-12-07T13:35:47.738+0000 7f4cd1d52700 1 mds.0.journaler.mdlog(ro) read_head
Dec 07 15:35:48 node02 bash[692598]: debug -45> 2023-12-07T13:35:47.738+0000 7f4cd1d52700 4 mds.0.log Waiting for journal 0x200 to recover...
Dec 07 15:35:48 node02 bash[692598]: debug -44> 2023-12-07T13:35:47.738+0000 7f4cdbd66700 10 monclient: get_auth_request con 0x55c280dc7c00 auth_method 0
Dec 07 15:35:48 node02 bash[692598]: debug -43> 2023-12-07T13:35:47.738+0000 7f4cd2553700 1 mds.0.journaler.mdlog(ro) _finish_read_head loghead(trim 1416940748800, expire 1416947000701, write 1417125359769, stream_format 1). probing for end of log (from 1417125359769)...
Dec 07 15:35:48 node02 bash[692598]: debug -42> 2023-12-07T13:35:47.738+0000 7f4cd2553700 1 mds.0.journaler.mdlog(ro) probing for end of the log
Dec 07 15:35:48 node02 bash[692598]: debug -41> 2023-12-07T13:35:47.738+0000 7f4cdb565700 10 monclient: get_auth_request con 0x55c280e2fc00 auth_method 0
Dec 07 15:35:48 node02 bash[692598]: debug -40> 2023-12-07T13:35:47.738+0000 7f4cdad64700 10 monclient: get_auth_request con 0x55c280ee0400 auth_method 0
Dec 07 15:35:48 node02 bash[692598]: debug -39> 2023-12-07T13:35:47.738+0000 7f4cd2553700 1 mds.0.journaler.mdlog(ro) _finish_probe_end write_pos = 1417129492480 (header had 1417125359769). recovered.
Dec 07 15:35:48 node02 bash[692598]: debug -38> 2023-12-07T13:35:47.738+0000 7f4cd1d52700 4 mds.0.log Journal 0x200 recovered.
Dec 07 15:35:48 node02 bash[692598]: debug -37> 2023-12-07T13:35:47.738+0000 7f4cd1d52700 4 mds.0.log Recovered journal 0x200 in format 1
Dec 07 15:35:48 node02 bash[692598]: debug -36> 2023-12-07T13:35:47.738+0000 7f4cd1d52700 2 mds.0.41 Booting: 1: loading/discovering base inodes
Dec 07 15:35:48 node02 bash[692598]: debug -35> 2023-12-07T13:35:47.738+0000 7f4cd1d52700 0 mds.0.cache creating system inode with ino:0x100
Dec 07 15:35:48 node02 bash[692598]: debug -34> 2023-12-07T13:35:47.738+0000 7f4cd1d52700 0 mds.0.cache creating system inode with ino:0x1
Dec 07 15:35:48 node02 bash[692598]: debug -33> 2023-12-07T13:35:47.742+0000 7f4cdbd66700 10 monclient: get_auth_request con 0x55c280dc7400 auth_method 0
Dec 07 15:35:48 node02 bash[692598]: debug -32> 2023-12-07T13:35:47.742+0000 7f4cd2553700 2 mds.0.41 Booting: 2: replaying mds log
Dec 07 15:35:48 node02 bash[692598]: debug -31> 2023-12-07T13:35:47.742+0000 7f4cd2553700 2 mds.0.41 Booting: 2: waiting for purge queue recovered
Dec 07 15:35:48 node02 bash[692598]: debug -30> 2023-12-07T13:35:47.742+0000 7f4cd3555700 1 mds.0.journaler.pq(ro) _finish_probe_end write_pos = 14797504512 (header had 14789452521). recovered.
Dec 07 15:35:48 node02 bash[692598]: debug -29> 2023-12-07T13:35:47.742+0000 7f4cd3555700 4 mds.0.purge_queue operator(): open complete
Dec 07 15:35:48 node02 bash[692598]: debug -28> 2023-12-07T13:35:47.742+0000 7f4cd3555700 4 mds.0.purge_queue operator(): recovering write_pos
Dec 07 15:35:48 node02 bash[692598]: debug -27> 2023-12-07T13:35:47.742+0000 7f4cdb565700 10 monclient: get_auth_request con 0x55c280bc5c00 auth_method 0
Dec 07 15:35:48 node02 bash[692598]: debug -26> 2023-12-07T13:35:47.742+0000 7f4cdad64700 10 monclient: get_auth_request con 0x55c280ee0c00 auth_method 0
Dec 07 15:35:48 node02 bash[692598]: debug -25> 2023-12-07T13:35:47.746+0000 7f4cd3555700 0 mds.0.journaler.pq(ro) _finish_read got error -2
Dec 07 15:35:48 node02 bash[692598]: debug -24> 2023-12-07T13:35:47.746+0000 7f4cd3555700 -1 mds.0.purge_queue _recover: Error -2 recovering write_pos
Dec 07 15:35:48 node02 bash[692598]: debug -23> 2023-12-07T13:35:47.746+0000 7f4cd3555700 1 mds.0.purge_queue _go_readonly: going readonly because internal IO failed: No such file or directory
Dec 07 15:35:48 node02 bash[692598]: debug -22> 2023-12-07T13:35:47.746+0000 7f4cd3555700 1 mds.0.journaler.pq(ro) set_readonly
Dec 07 15:35:48 node02 bash[692598]: debug -21> 2023-12-07T13:35:47.746+0000 7f4cd3555700 -1 mds.0.41 unhandled write error (2) No such file or directory, force readonly...
Dec 07 15:35:48 node02 bash[692598]: debug -20> 2023-12-07T13:35:47.746+0000 7f4cd3555700 1 mds.0.cache force file system read-only
Dec 07 15:35:48 node02 bash[692598]: debug -19> 2023-12-07T13:35:47.746+0000 7f4cd3555700 0 log_channel(cluster) log [WRN] : force file system read-only
---snip---




