The case and the question?

 Hello,
 
   Here is the case and what I have done so far:
 
   I have installed RHEL ES v3 Update 4 on the two nodes, and the shared storage can be seen from both nodes as /dev/cciss/c0d0. I configured each node and installed the packages needed by Oracle.

   Then I made two partitions on the shared storage, each of 100M, to use as raw1 and raw2 for the Cluster Suite quorum. I added them to the /etc/sysconfig/rawdevices file (I show the entries below), ran the "service rawdevices restart" command, and everything ran OK. Then I installed Red Hat Cluster Suite Update 4 on the two nodes and configured the raw devices needed there. After that I mailed the list about needing Cluster Suite to run an Oracle RAC 10g cluster, and the list answered that I do not need Cluster Suite.
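   For reference, the entries in /etc/sysconfig/rawdevices look roughly like this (I am showing p1 and p2 only as examples; the real partition numbers depend on how the disk was partitioned):

       # /etc/sysconfig/rawdevices
       # <raw device>    <block device>  (the two 100M quorum partitions)
       /dev/raw/raw1     /dev/cciss/c0d0p1
       /dev/raw/raw2     /dev/cciss/c0d0p2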
 
  So I continued and installed Red Hat Global File System (GFS 6) Update 4 on the two nodes, and I used the documentation to configure it.
 
  I made another two partitions on the shared storage for Oracle's use and built the configuration files for GFS. After that, and before starting the Oracle installation, I tried to test shutting down one of the nodes while the two GFS partitions on the shared storage were mounted. An error message appeared saying the lock_gulm server is still running, and it kept appearing for a long time without the node shutting down, until I powered it off with the power button on the server.
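  I suspect the hang has to do with the shutdown order. As far as I understand the GFS 6 init scripts, the GFS file systems must be unmounted before lock_gulmd stops, roughly like this (the /mnt/pool00 and /mnt/pool01 mount points are only examples of where I mount the pools):

       umount /mnt/pool00         # unmount every GFS file system first
       umount /mnt/pool01
       service gfs stop           # the gfs init script also unmounts GFS mounts
       service lock_gulmd stop    # stop the lock server only after GFS is unmounted
       service ccsd stop          # then stop the cluster configuration daemon
       service pool stop          # deactivate the pool devices last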
 
 
  I do not really understand lock_gulm or the CCS configuration files, so I may have made a mistake in any of them (I think it is the fence file, because I did not understand it).
 
  I will give you the files I have built so far. I am using two ProLiant ML250 servers and a shared RAID 5 array with 208.5 G of its space remaining. Here are the files:
 
      the pool00.cfg for the first partition on the shared storage (to be used by Oracle):
 
                 poolname pool00
                 minor 0
                 subpools 1
                 subpool 0 128 1 gfs_data
                 pooldevice 0 0 /dev/cciss/c0d0p5
 
 
      the pool01.cfg for the second partition on the shared storage (to be used by Oracle):
 
                 poolname pool01
                 minor 1
                 subpools 1
                 subpool 0 128 1 gfs_data
                 pooldevice 0 0 /dev/cciss/c0d0p6
 
 
     Then I ran the pool_tool command for them, and everything was OK.
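  The commands I ran were roughly the following (as I read the GFS 6 docs, pool_assemble has to be run on each node so that both see the pools):

       pool_tool -c pool00.cfg    # write the pool label described in pool00.cfg
       pool_tool -c pool01.cfg    # write the pool label described in pool01.cfg
       pool_assemble -a           # activate all pools (run on both nodes)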
 
 
       Then I created a directory for the CCS files in root's home directory:

          /root/clu

  and I put the following CCS files in /root/clu:
 
             cluster.ccs:

                cluster {
                    name = "oracluster"
                    lock_gulm {
                        servers = ["orat1", "orat2"]
                        heartbeat_rate = 0.3
                        allowed_misses = 1
                    }
                }
    
          fence.ccs:

             fence_devices {
                 admin {
                     agent = "fence_manual"
                 }
             }
------------------------------------------------------------
             
          nodes.ccs:

             nodes {
                 orat1 {
                     ip_interfaces {
                         eth1 = "10.0.0.2"
                     }
                     fence {
                         human {
                             admin {
                                 ipaddr = "10.0.0.2"
                             }
                         }
                     }
                 }

                 orat2 {
                     ip_interfaces {
                         eth1 = "10.0.0.3"
                     }
                     fence {
                         human {
                             admin {
                                 ipaddr = "10.0.0.3"
                             }
                         }
                     }
                 }
             }
--------------------------------------------------------------------------
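  After building these files, my understanding from the GFS 6 docs is that they must be written into a CCS archive which ccsd then serves on every node, roughly like this, assuming a small extra pool named cca is set aside for the archive (the cca name is only an example, not something I have created yet):

       ccs_tool create /root/clu /dev/pool/cca    # pack the CCS files into the archive device
       ccsd -d /dev/pool/cca                      # start the CCS daemon (on each node)
       service lock_gulmd start                   # then start the lock server (on each node)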
        So that is all of the files, and that is everything I have done with them. But I still do not understand what the fence file is used for, and whether it is right for my case. I chose manual fencing, but do I need additional hardware to use it, or what?
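  From what I have read, manual fencing should need no extra hardware: when a node fails, the fence_manual agent simply waits for a human to power the node off and then acknowledge it, something like this (if I understand correctly, -s takes the failed node's IP address as listed in nodes.ccs):

       # after powering off orat2 by hand, run on the surviving lock server:
       fence_ack_manual -s 10.0.0.3

  But I am not sure this is how it is meant to work, which is why I am asking.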
 
 
 
       Sorry for the long e-mail, but I wanted to give the reader correct information about my case.
 
 
        Any answer for my case, please?
 
Thanks

Regards
-------------------------------------------------
Yazan