MDS has been repeatedly "laggy or crashed"

Hi all.
I have 3 servers (running Ceph 0.56.6):
1 server for the mon & mds.0
1 server running an OSD daemon (RAID 6, 44 TB = osd.0) & mds.1
1 server running an OSD daemon (RAID 6, 44 TB = osd.1) & mds.2
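
For reference, this layout corresponds roughly to the ceph.conf sketch below. The hostnames are placeholders rather than my real config; only the monitor address matches the 10.160.0.70 monitor visible in the logs further down.

[mon.0]
        host = mon-host                  # placeholder hostname
        mon addr = 10.160.0.70:6789

[mds.0]
        host = mon-host                  # mds.0 runs alongside the monitor

[osd.0]
        host = osd-host-1                # RAID 6 volume (~44 TB) backs this OSD
[mds.1]
        host = osd-host-1

[osd.1]
        host = osd-host-2                # RAID 6 volume (~44 TB) backs this OSD
[mds.2]
        host = osd-host-2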

While the cluster is running, the MDSs are repeatedly flagged "laggy or crashed" (about twice a minute); each time the MDS then reconnects and comes back to "active".

I set max mds = 1 (one active MDS) ==> this error occurs
I set max mds = 2 (two active MDSs) ==> this error also occurs
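
By "set max mds" I mean roughly the commands below; this is a sketch from memory, and the exact syntax on 0.56.6 may differ:

        ceph mds set_max_mds 1     # single active MDS rank, the others stay standby
        ceph mds set_max_mds 2     # two active MDS ranks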

2013-06-02 09:34:11.221201 mon.0 [INF] pgmap v73471: 576 pgs: 576 active+clean; 10667 GB data, 21474 GB used, 60471 GB / 81946 GB avail; 11306B/s wr, 2op/s
2013-06-02 09:34:12.633876 mon.0 [INF] mdsmap e474: 2/2/2 up {0=0=up:resolve,1=1=up:active}, 1 up:standby
2013-06-02 09:34:12.634289 mon.0 [INF] mds.0 10.160.0.70:6805/9786 up:resolve
2013-06-02 09:34:13.018398 mon.0 [INF] mdsmap e475: 2/2/2 up {0=0=up:reconnect,1=1=up:active}, 1 up:standby
2013-06-02 09:34:13.018525 mon.0 [INF] mds.0 10.160.0.70:6805/9786 up:reconnect
2013-06-02 09:34:13.550723 mon.0 [INF] pgmap v73472: 576 pgs: 576 active+clean; 10667 GB data, 21474 GB used, 60471 GB / 81946 GB avail
2013-06-02 09:34:13.831181 mon.0 [INF] mdsmap e476: 2/2/2 up {0=0=up:rejoin,1=1=up:active}, 1 up:standby
2013-06-02 09:34:13.831363 mon.0 [INF] mds.0 10.160.0.70:6805/9786 up:rejoin
2013-06-02 09:34:14.065637 mon.0 [INF] mdsmap e477: 2/2/2 up {0=0=up:active,1=1=up:active}, 1 up:standby
2013-06-02 09:34:14.065846 mon.0 [INF] mds.0 10.160.0.70:6805/9786 up:active
2013-06-02 09:34:15.945364 mon.0 [INF] pgmap v73473: 576 pgs: 576 active+clean; 10667 GB data, 21474 GB used, 60471 GB / 81946 GB avail; 3842B/s rd, 13662B/s wr, 6op/s
2013-06-02 09:34:17.815203 mon.0 [INF] pgmap v73474: 576 pgs: 576 active+clean; 10667 GB data, 21474 GB used, 60471 GB / 81946 GB avail; 12789KB/s wr, 3op/s
2013-06-02 09:34:21.005787 mon.0 [INF] pgmap v73475: 576 pgs: 576 active+clean; 10667 GB data, 21474 GB used, 60471 GB / 81946 GB avail; 27942KB/s wr, 10op/s
2013-06-02 09:34:22.967432 mon.0 [INF] pgmap v73476: 576 pgs: 576 active+clean; 10667 GB data, 21474 GB used, 60471 GB / 81946 GB avail; 55391KB/s wr, 13op/s
2013-06-02 09:34:26.489249 mon.0 [INF] pgmap v73477: 576 pgs: 576 active+clean; 10668 GB data, 21474 GB used, 60471 GB / 81946 GB avail; 37260KB/s wr, 11op/s
2013-06-02 09:36:27.059220 mon.0 [INF] pgmap v73478: 576 pgs: 576 active+clean; 10668 GB data, 21474 GB used, 60471 GB / 81946 GB avail; 46878KB/s wr, 11op/s
2013-06-02 09:36:27.579043 mon.0 [INF] pgmap v73479: 576 pgs: 576 active+clean; 10668 GB data, 21474 GB used, 60471 GB / 81946 GB avail
2013-06-02 09:36:27.579291 mon.0 [INF] osdmap e166: 2 osds: 2 up, 2 in
2013-06-02 09:36:27.739135 mon.0 [INF] mdsmap e478: 2/2/2 up {0=2=up:replay(laggy or crashed),1=1=up:active}
2013-06-02 09:36:28.342075 mon.0 [INF] mdsmap e479: 2/2/2 up {0=2=up:replay(laggy or crashed),1=1=up:active}, 1 up:standby
2013-06-02 09:36:28.342318 mon.0 [INF] mds.? 10.160.0.70:6806/9826 up:boot
2013-06-02 09:36:28.691609 mon.0 [INF] pgmap v73480: 576 pgs: 576 active+clean; 10668 GB data, 21476 GB used, 60469 GB / 81946 GB avail; 698MB/s wr, 202op/s
2013-06-02 09:36:29.856063 mon.0 [INF] mdsmap e480: 2/2/2 up {0=2=up:replay,1=1=up:active}, 1 up:standby
2013-06-02 09:36:29.856197 mon.0 [INF] mds.0 10.160.0.74:6806/10975 up:replay
2013-06-02 09:36:30.333710 mon.0 [INF] pgmap v73481: 576 pgs: 576 active+clean; 10668 GB data, 21476 GB used, 60469 GB / 81946 GB avail
2013-06-02 09:36:32.391503 mon.0 [INF] pgmap v73482: 576 pgs: 576 active+clean; 10668 GB data, 21476 GB used, 60469 GB / 81946 GB avail; 0B/s wr, 0op/s
2013-06-02 09:36:33.965265 mon.0 [INF] pgmap v73483: 576 pgs: 576 active+clean; 10668 GB data, 21476 GB used, 60469 GB / 81946 GB avail; 6384KB/s wr, 1op/s
2013-06-02 09:36:35.344559 mon.0 [INF] mdsmap e481: 2/2/2 up {0=2=up:resolve,1=1=up:active}, 1 up:standby
2013-06-02 09:36:35.344973 mon.0 [INF] mds.0 10.160.0.74:6806/10975 up:resolve
2013-06-02 09:36:36.981035 mon.0 [INF] pgmap v73484: 576 pgs: 576 active+clean; 10668 GB data, 21476 GB used, 60469 GB / 81946 GB avail; 22679KB/s wr, 10op/s
2013-06-02 09:36:38.930508 mon.0 [INF] pgmap v73485: 576 pgs: 576 active+clean; 10669 GB data, 21476 GB used, 60469 GB / 81946 GB avail; 34007KB/s wr, 8op/s
2013-06-02 09:36:39.497057 mon.0 [INF] mdsmap e482: 2/2/2 up {0=2=up:reconnect,1=1=up:active}, 1 up:standby
2013-06-02 09:36:39.497173 mon.0 [INF] mds.0 10.160.0.74:6806/10975 up:reconnect
2013-06-02 09:36:39.623653 mon.0 [INF] mdsmap e483: 2/2/2 up {0=2=up:rejoin,1=1=up:active}, 1 up:standby
2013-06-02 09:36:39.623778 mon.0 [INF] mds.0 10.160.0.74:6806/10975 up:rejoin
2013-06-02 09:36:39.783063 mon.0 [INF] mdsmap e484: 2/2/2 up {0=2=up:active,1=1=up:active}, 1 up:standby
2013-06-02 09:36:39.783187 mon.0 [INF] mds.0 10.160.0.74:6806/10975 up:active
2013-06-02 09:36:41.973938 mon.0 [INF] pgmap v73486: 576 pgs: 576 active+clean; 10669 GB data, 21476 GB used, 60469 GB / 81946 GB avail; 2997B/s rd, 10672KB/s wr, 7op/s


Log from one of the MDS daemons:
2013-06-02 09:43:36.206136 7f8bb75fe700  1 mds.1.9  recovery set is 0
2013-06-02 09:43:36.206156 7f8bb75fe700  1 mds.1.9  need osdmap epoch 170, have 169
2013-06-02 09:43:36.206162 7f8bb75fe700  1 mds.1.9  waiting for osdmap 170 (which blacklists prior instance)
2013-06-02 09:43:36.206355 7f8bb75fe700  1 mds.1.cache handle_mds_failure mds.1 : recovery peers are 0
2013-06-02 09:43:36.208571 7f8bb75fe700  0 mds.1.9 handle_mds_beacon no longer laggy
2013-06-02 09:43:36.216766 7f8bb75fe700  0 mds.1.9 ms_handle_connect on 10.160.0.71:6801/5149
2013-06-02 09:43:36.464403 7f8bb75fe700  0 mds.1.9 ms_handle_connect on 10.160.0.74:6800/5239
2013-06-02 09:43:36.490680 7f8bb75fe700  0 mds.1.cache creating system inode with ino:101
2013-06-02 09:43:36.490995 7f8bb75fe700  0 mds.1.cache creating system inode with ino:1
2013-06-02 09:43:53.105530 7f8b8ffff700  1 mds.1.9 replay_done
2013-06-02 09:43:53.105594 7f8b8ffff700  1 mds.1.9 making mds journal writeable
2013-06-02 09:44:00.082234 7f8bb75fe700  1 mds.1.9 handle_mds_map i am now mds.1.9
2013-06-02 09:44:00.082244 7f8bb75fe700  1 mds.1.9 handle_mds_map state change up:replay --> up:resolve
2013-06-02 09:44:00.082249 7f8bb75fe700  1 mds.1.9 resolve_start
2013-06-02 09:44:00.082253 7f8bb75fe700  1 mds.1.9 reopen_log
2013-06-02 09:44:00.082283 7f8bb75fe700  1 mds.1.9  recovery set is 0
2013-06-02 09:44:00.084820 7f8b8f4fd700  0 -- 10.160.0.70:6807/10232 >> 10.160.0.71:6832/17825 pipe(0x7f8b876eea10 sd=122 :6807 s=0 pgs=0 cs=0 l=0).accept connect_seq 0 vs existing 0 state connecting
2013-06-02 09:44:00.246788 7f8bb75fe700  1 mds.1.9 resolve_done
2013-06-02 09:44:00.886678 7f8bb75fe700  1 mds.1.9 handle_mds_map i am now mds.1.9
2013-06-02 09:44:00.886687 7f8bb75fe700  1 mds.1.9 handle_mds_map state change up:resolve --> up:reconnect
2013-06-02 09:44:00.886704 7f8bb75fe700  1 mds.1.9 reconnect_start
2013-06-02 09:44:00.886725 7f8bb75fe700  1 mds.1.server reconnect_clients -- 1 sessions
2013-06-02 09:44:00.891295 7f8b8f5fe700  0 -- 10.160.0.70:6807/10232 >> 10.160.0.78:0/2891255972 pipe(0x7f8b87aa7ab0 sd=123 :6807 s=0 pgs=0 cs=0 l=0).accept peer addr is really 10.160.0.78:0/2891255972 (socket is 10.160.0.78:58726/0)
2013-06-02 09:44:00.907797 7f8bb75fe700  0 log [DBG] : reconnect by client.4710 10.160.0.78:0/2891255972 after 0.020951
2013-06-02 09:44:00.907823 7f8bb75fe700  0 mds.1.server non-auth 100006b93cd #1/test19, will pass off to authority
2013-06-02 09:44:00.929781 7f8bb75fe700  1 mds.1.9 reconnect_done
2013-06-02 09:44:01.514894 7f8bb75fe700  1 mds.1.9 handle_mds_map i am now mds.1.9
2013-06-02 09:44:01.514902 7f8bb75fe700  1 mds.1.9 handle_mds_map state change up:reconnect --> up:rejoin
2013-06-02 09:44:01.514910 7f8bb75fe700  1 mds.1.9 rejoin_joint_start
2013-06-02 09:44:01.893233 7f8bb75fe700  1 mds.1.9 rejoin_done
2013-06-02 09:44:02.251467 7f8bb75fe700  1 mds.1.9 handle_mds_map i am now mds.1.9
2013-06-02 09:44:02.251476 7f8bb75fe700  1 mds.1.9 handle_mds_map state change up:rejoin --> up:active
2013-06-02 09:44:02.251484 7f8bb75fe700  1 mds.1.9 recovery_done -- successful recovery!
2013-06-02 09:44:02.310016 7f8bb75fe700  1 mds.1.9 active_start
2013-06-02 09:44:02.310485 7f8bb75fe700  1 mds.1.9 cluster recovered.
2013-06-02 09:44:14.783358 7f8bb75fe700  0 mds.1.9 handle_mds_beacon no longer laggy
2013-06-02 09:44:59.721979 7f8389218700  0 mds.-1.0 ms_handle_connect on 10.160.0.70:6789/0
2013-06-02 09:45:00.140690 7f8389218700  1 mds.-1.0 handle_mds_map standby
2013-06-02 09:45:06.290908 7f8389218700  1 mds.-1.0 handle_mds_map standby
2013-06-02 09:45:06.566650 7f8389218700  1 mds.-1.0 handle_mds_map standby
2013-06-02 09:45:06.816915 7f8389218700  1 mds.-1.0 handle_mds_map standby
2013-06-02 09:45:07.287924 7f8389218700  1 mds.-1.0 handle_mds_map standby
2013-06-02 09:45:15.034700 7f8389218700  1 mds.-1.0 handle_mds_map standby

How can I fix this?

Thanks
--
BT
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
