Re: One mds daemon damaged, filesystem is offline. How to recover?

 On Saturday, May 22, 2021, 03:14:13 PM GMT+8, Eugen Block <eblock@xxxxxx> wrote:
 
 What does the MDS report in its logs from when it went down?

NOTE: The power failure happened at around 2021-05-20 23:56.
Here are the log messages from the mds.0 log:
2021-05-20 17:26:19.358 2192d80  1 mds.a Updating MDS map to version 8746 from mon.0
2021-05-20 23:56:43.129 1364480  0 set uid:gid to 167:167 (ceph:ceph)
2021-05-20 23:56:43.129 1364480  0 ceph version 14.2.11 (f7fdb2f52131f54b891a2ec99d8205561242cdaf) nautilus (stable), process ceph-mds, pid 1624
2021-05-20 23:56:43.255 1fd6d80  1 mds.a Updating MDS map to version 8747 from mon.0
2021-05-20 23:56:47.327 1fd6d80  1 mds.a Updating MDS map to version 8748 from mon.0
2021-05-20 23:56:47.327 1fd6d80  1 mds.a Monitors have assigned me to become a standby.
2021-05-20 23:56:47.344 1fd6d80  1 mds.a Updating MDS map to version 8749 from mon.0
2021-05-20 23:56:47.658 1fd6d80  1 mds.0.8749 handle_mds_map i am now mds.0.8749
2021-05-20 23:56:47.689 1fd6d80  1 mds.0.8749 handle_mds_map state change up:boot --> up:replay
2021-05-20 23:56:47.689 1fd6d80  1 mds.0.8749 replay_start
2021-05-20 23:56:47.689 1fd6d80  1 mds.0.8749  recovery set is
2021-05-20 23:56:47.689 1fd6d80  1 mds.0.8749  waiting for osdmap 6958 (which blacklists prior instance)
2021-05-20 23:56:48.165 2228880  0 mds.0.cache creating system inode with ino:0x100
2021-05-20 23:56:48.177 2228880  0 mds.0.cache creating system inode with ino:0x1
2021-05-20 23:56:52.223 2227200  0 mds.0.journaler.mdlog(ro) _finish_read got less than expected (1555896)
2021-05-20 23:56:52.223 2229180  0 mds.0.log _replay journaler got error -22, aborting
2021-05-20 23:56:52.223 2229180 -1 log_channel(cluster) log [ERR] : Error loading MDS rank 0: (22) Invalid argument
2021-05-20 23:56:52.224 2229180  1 mds.a respawn!
--- begin dump of recent events ---
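For context on the error above: `_finish_read got less than expected` followed by `journaler got error -22, aborting` means the MDS journal in the metadata pool is truncated or corrupt, so rank 0 aborts replay and the monitors mark it damaged. The Ceph disaster-recovery documentation describes roughly the sequence below for this situation. This is a hedged sketch, not advice from the original thread: the filesystem name `cephfs` is an assumption (substitute your own), always export a backup first, and note that resetting the journal discards any metadata updates that could not be recovered.

```shell
# Sketch of the documented CephFS journal recovery sequence (Nautilus-era
# syntax). "cephfs" is an assumed filesystem name; rank 0 matches mds.0
# from the log above. Stop all MDS daemons for this filesystem first.

# 1. Export a raw backup of the damaged journal before touching anything.
cephfs-journal-tool --rank=cephfs:0 journal export backup.bin

# 2. Recover as many dentries as possible from the readable part of the
#    journal back into the metadata pool.
cephfs-journal-tool --rank=cephfs:0 event recover_dentries summary

# 3. Reset (truncate) the damaged journal. Anything not recovered in
#    step 2 is lost.
cephfs-journal-tool --rank=cephfs:0 journal reset

# 4. Mark rank 0 as repaired so the monitors will assign an MDS again.
ceph mds repaired cephfs:0
```

After the rank comes back, a forward scrub (`ceph tell mds.a scrub start / recursive repair`) is commonly recommended to catch any remaining metadata inconsistencies left behind by the truncated journal.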

Sagara
  
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



