question about rebooting master server

I have both servers, tf1 and tf2, connected to shared storage (a Dell 220S) running 6.0.2.
They both seem to access the shared storage fine, but if I reboot the node that's the GULM master,
the slave cannot access the shared storage until the master comes back up.
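For reference, this is roughly how the lock servers are declared on my setup; a minimal cluster.ccs sketch (the cluster name is taken from the GFS fsid "progressive" in the logs below, and the server list is an assumption based on my two-node layout):

# cluster.ccs -- minimal sketch; server list is an assumption
cluster {
        name = "progressive"
        lock_gulm {
                servers = ["tf1", "tf2"]
        }
}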
Here's the info from the logs.

(tf2, the master, was rebooted; this is the log on tf1)
May 13 14:55:29 tf1 heartbeat: [5333]: info: local resource transition completed.
May 13 14:55:35 tf1 kernel: lock_gulm: Checking for journals for node "tf2.localdomain"
May 13 14:55:35 tf1 lock_gulmd_core[5007]: Master Node has logged out. 
May 13 14:55:35 tf1 kernel: lock_gulm: Checking for journals for node "tf2.localdomain"
May 13 14:55:36 tf1 lock_gulmd_core[5007]: I see no Masters, So I am Arbitrating until enough Slaves talk to me. 
May 13 14:55:36 tf1 lock_gulmd_core[5007]: Could not send quorum update to slave tf1.localdomain 
May 13 14:55:36 tf1 lock_gulmd_LTPX[5014]: New Master at tf1.localdomain:192.168.1.5 
May 13 14:55:57 tf1 lock_gulmd_core[5007]: Timeout (15000000) on fd:6 (tf2.localdomain:192.168.1.6) 
May 13 14:56:32 tf1 last message repeated 2 times
May 13 14:57:40 tf1 last message repeated 4 times
May 13 14:58:31 tf1 last message repeated 3 times
May 13 14:58:45 tf1 lock_gulmd_core[5007]: Now have Slave quorum, going full Master. 
May 13 14:58:45 tf1 lock_gulmd_core[5007]: New Client: idx:2 fd:6 from (192.168.1.6:tf2.localdomain) 
May 13 14:58:45 tf1 lock_gulmd_LT000[5010]: New Client: idx 2 fd 7 from (192.168.1.5:tf1.localdomain) 
May 13 14:58:45 tf1 lock_gulmd_LTPX[5014]: Logged into LT000 at tf1.localdomain:192.168.1.5 
May 13 14:58:45 tf1 lock_gulmd_LTPX[5014]: Finished resending to LT000 
May 13 14:58:46 tf1 lock_gulmd_LT000[5010]: Attached slave tf2.localdomain:192.168.1.6 idx:3 fd:8 (soff:3 connected:0x8) 
May 13 14:58:46 tf1 kernel: GFS: fsid=progressive:gfs1.0: jid=1: Trying to acquire journal lock...
May 13 14:58:46 tf1 kernel: GFS: fsid=progressive:gfs1.0: jid=1: Looking at journal...
May 13 14:58:47 tf1 kernel: GFS: fsid=progressive:gfs1.0: jid=1: Done
May 13 14:58:47 tf1 kernel: GFS: fsid=progressive:gfs1.0: jid=1: Trying to acquire journal lock...
May 13 14:58:47 tf1 kernel: GFS: fsid=progressive:gfs1.0: jid=1: Busy
May 13 14:58:47 tf1 kernel: GFS: fsid=progressive:gfs1.0: jid=1: Trying to acquire journal lock...
May 13 14:58:47 tf1 kernel: GFS: fsid=progressive:gfs1.0: jid=1: Busy
May 13 14:58:47 tf1 lock_gulmd_LT000[5010]: New Client: idx 4 fd 9 from (192.168.1.6:tf2.localdomain) 

Is this normal? I would have assumed that when the master was rebooted, the other node would still be
able to access the storage with no problems.

regards,
Jason




--

Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
