Re: How to take down a CS/GFS setup with minimum downtime

Thanks for this, Lon. I'm down to the last two cluster nodes, and according
to cman_tool status I have two nodes, two votes, and a quorum of two:
--
Nodes: 2
Expected_votes: 5
Total_votes: 2
Quorum: 2
--

One of those nodes has the GFS filesystems mounted.
If I issue cman_tool leave remove on the other node, will I run into any
problems on the GFS-mounted node (for example, due to loss of quorum)?
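
For reference, this is roughly what I plan to run (a sketch only; I'm
assuming rgmanager is the only other cluster service involved, and the
service names are as on RHEL5):

--
# on the node WITHOUT the GFS mounts
service rgmanager stop        # if rgmanager is running here
cman_tool leave remove        # leave and decrement expected votes

# then, on the remaining (GFS-mounted) node
cman_tool status              # Expected_votes and Quorum should both drop
--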



On Mon, 2007-10-29 at 10:56 -0400, Lon Hohberger wrote:

> That should do it, yes.  Leave remove is supposed to decrement the
> expected votes (and thus the quorum), meaning you can go from 5 nodes
> down to 1 if done correctly.  You can verify that the expected votes
> count decreases with each removal using 'cman_tool status'.
> 
> 
> If for some reason the above doesn't work, the alternative looks
> something like this:
>   * unmount the GFS volume + stop cluster on all nodes
>   * use gfs_tool to alter the lock proto to nolock
>   * mount on node 1.  copy out data.  unmount!
>   * mount on node 2.  copy out data.  unmount!
>   * ...
>   * mount on node 5.  copy out data.  unmount!
> 
> -- Lon
> 
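
For the archives, the nolock fallback described above would look roughly
like this (a sketch only; /dev/vg0/gfslv and /mnt/gfs stand in for the
real device and mount point):

--
# cluster stopped on all nodes, filesystem unmounted everywhere
gfs_tool sb /dev/vg0/gfslv proto lock_nolock   # gfs_tool refuses this on a mounted fs

# then, strictly one node at a time:
mount -t gfs /dev/vg0/gfslv /mnt/gfs
cp -a /mnt/gfs/. /path/to/backup/              # copy the data out
umount /mnt/gfs                                # unmount before moving to the next node!
--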


--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
