RE: MySQL Failover / Failback

From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Jeff Stoner
Sent: Friday, June 01, 2007 1:21 PM
To: linux clustering
Subject: RE: MySQL Failover / Failback

Sounds like you've got several things happening all at once. If you are not using MySQL Cluster, then you will probably have an active/passive setup, in which MySQL runs on only one node. If you are using MySQL Cluster, why are you using Red Hat Cluster?
 
Yes, it is using MySQL Replication, NOT MySQL Cluster. It is active/passive: one node is read-write (master), the other is read-only (slave).
 
Replication? Are you referring to MySQL Replication? What is replicating where? Are the slaves part of the Red Hat Cluster? If you simply mean "will replication break if MySQL fails over," then no. The slave will retry connecting to the master (according to the connection-retry settings in MySQL). Also, you must use the Red Hat Cluster-controlled IP when establishing replication, not the IP of any particular node (for obvious reasons).
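For example, the slave would be pointed at the cluster VIP rather than at either node's own address. A minimal sketch (the replication account name, password, and retry interval are illustrative assumptions, not values from this thread):

```sql
-- Run on the slave: replicate from the cluster-controlled VIP so that
-- a failover of the master service is transparent to replication.
CHANGE MASTER TO
    MASTER_HOST = '10.2.2.2',      -- the VIP managed by Red Hat Cluster
    MASTER_USER = 'repl',          -- hypothetical replication account
    MASTER_PASSWORD = '********',
    MASTER_CONNECT_RETRY = 60;     -- seconds between reconnect attempts
START SLAVE;
```

If the master service moves to the other node, the slave's reconnect attempts land on whichever node currently holds the VIP.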
 
For my MySQL databases built on Red Hat Cluster, I define the service as follows:
 
<service autostart="1" domain="mysql-fail-domain" name="mysql5-service">
   <ip ref="10.2.2.2"/>
   <fs ref="mysqlfs"/>
   <script ref="mysqld"/>
</service>
The filesystem resource is a slice of SAN storage accessible to all nodes in the cluster. The script resource is a (modified) /etc/init.d/mysqld init script.
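For completeness, the `<ip/>`, `<fs/>`, and `<script/>` references above would resolve against a `<resources>` section of cluster.conf. A minimal sketch (the device path, mount point, and filesystem type are illustrative assumptions):

```xml
<resources>
    <!-- <ip ref="10.2.2.2"/> in the service matches this address -->
    <ip address="10.2.2.2" monitor_link="1"/>
    <!-- <fs ref="mysqlfs"/> and <script ref="mysqld"/> match these names -->
    <fs name="mysqlfs" device="/dev/mapper/san-mysql"
        mountpoint="/var/lib/mysql" fstype="ext3" force_unmount="1"/>
    <script name="mysqld" file="/etc/init.d/mysqld"/>
</resources>
```

Note that `ref` on an `<ip/>` matches the resource's `address` attribute, while `ref` on `<fs/>` and `<script/>` matches the resource's `name`.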
 
 
What does your resource section look like for the IP address? I keep getting the following errors (posted in another message).
 

--Jeff

Service Engineer
OpSource Inc.

"The Message of the Day is /etc/motd"

I am curious if anyone knows the best practices for this. Several use cases include:
 
Note: We are choosing to use a VIP (virtual IP) for the two nodes so that the failover is transparent to the application side.
 
1) Node 1 (master) dies
        - How do we enable "sticky" failover so that the service does not fail back to Node 1?
        - Is Node 2 active all the time, or is the service completely shut off? If it's off, how would replication happen?
        - How do failover domains work in this case?
2) Node 2 is now master and Node 1 has recovered
        - How does replication resume?
        - How does the master/slave relationship change? Is this automated, or does it require manual intervention? Should we be using DRBD instead?
3) Node 1 (master), Node 2 (slave): network connectivity dies on Node 1
        - There is an IP resource available, but how does it monitor the link and handle failover?
        - How can I move the VIP in the event of a failure? Do I need to script this manually?
 
With VIP failover, do I attach the VIP resource to the MySQL resource in the failover domain for those two nodes? What happens if I do this?
 
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
