shared storage clustering

Hello

I have two shared GFS volumes, one for Postgres and the other for storing files. Whenever I upload a file, the information related to it is written to the Postgres volume and the actual file is copied to the storage volume, so both the database and the file storage are accessible through a shared mechanism.

I have configured the failover cluster so that node1 or node2 takes control when the other node fails (active/passive clustering).
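
For reference, the failover domain the service below refers to looks roughly like this (the ordered/restricted flags and the node names node1/node2 are placeholders for what I actually have):

               <failoverdomains>
                       <failoverdomain name="postgres" ordered="1" restricted="1">
                               <failoverdomainnode name="node1" priority="1"/>
                               <failoverdomainnode name="node2" priority="2"/>
                       </failoverdomain>
               </failoverdomains>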

The resources and service sections of my cluster.conf:
               <resources>
                       <script file="/etc/init.d/postgresql" name="Postgresql"/>
                       <clusterfs device="/dev/pgsqlvg/datalv" fstype="gfs" mountpoint="/pgsql" name="Postgres" options=""/>
                       <script file="/etc/init.d/vsftpd" name="Vsftpd"/>
                       <clusterfs device="/dev/pgsqlvg/storage" fstype="gfs" mountpoint="/home/ftpuser" name="Storage" options=""/>
                       <ip address="151.8.18.147" monitor_link="1"/>
               </resources>
               <service autostart="1" domain="postgres" name="Postgres">
                       <script ref="Postgresql"/>
                       <clusterfs ref="Postgres"/>
                       <script ref="Vsftpd"/>
                       <clusterfs ref="Storage"/>
                       <ip ref="151.8.18.147"/>
               </service>

The two shared volumes:
* device="/dev/pgsqlvg/datalv" for Postgres (<script file="/etc/init.d/postgresql" name="Postgresql"/> runs the database)
* device="/dev/pgsqlvg/storage" for storing files (<script file="/etc/init.d/vsftpd" name="Vsftpd"/> loads files into storage)

Whenever a node takes control, both volumes have to be mounted, the appropriate service for each volume has to be started, and the cluster has to remain reachable through the virtual IP (<ip address="151.8.18.147" monitor_link="1"/>).

For such a situation, is my current configuration (adding all resources to a single service) enough, or is there a recommended configuration or procedure for this kind of setup in order to avoid a single point of failure?
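
For example, would it be better to nest the script resources under their clusterfs resources inside the service, so each filesystem is guaranteed to be mounted before its service starts and is unmounted only after it stops? A rough sketch of what I mean, reusing the resource names defined above:

               <service autostart="1" domain="postgres" name="Postgres">
                       <ip ref="151.8.18.147"/>
                       <clusterfs ref="Postgres">
                               <script ref="Postgresql"/>
                       </clusterfs>
                       <clusterfs ref="Storage">
                               <script ref="Vsftpd"/>
                       </clusterfs>
               </service>

As I understand it, rgmanager starts child resources after their parent and stops them before it, so the mount/start ordering would be enforced; I am not sure, though, whether keeping everything in one service, rather than splitting Postgres and vsftpd into separate services, is the recommended approach.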
Thanks in advance for the help.

