Re: GFS

Hi,

 Sorry, I don't have much time, which is why I'm mailing this in a hurry.
If you get stuck anywhere, mail me, and apologies if any words are
misspelled.

 For GFS to work you need to install all the cluster-related RPMs and
configure a simple running cluster with the following things set up.

Cluster RPMs and dependencies

 rpm -ivh ccs-1.0.10-0.i686.rpm cluster-cim-0.9.1-8.i386.rpm
cluster-snmp-0.9.1-8.i386.rpm cman-1.0.17-0.i686.rpm
cman-kernel-2.6.9-50.2.i686.rpm cman-kernel-smp-2.6.9-50.2.i686.rpm
cman-kernheaders-2.6.9-50.2.i686.rpm dlm-1.0.3-1.i686.rpm
dlm-kernel-2.6.9-46.16.i686.rpm dlm-kernel-smp-2.6.9-46.16.i686.rpm
dlm-kernheaders-2.6.9-46.16.i686.rpm fence-1.32.45-1.i686.rpm
iddev-2.0.0-4.i686.rpm ipvsadm-1.24-6.i386.rpm luci-0.9.1-8.i386.rpm
magma-1.0.7-1.i686.rpm magma-devel-1.0.7-1.i686.rpm
magma-plugins-1.0.12-0.i386.rpm modcluster-0.9.1-8.i386.rpm
perl-Net-Telnet-3.03-3.noarch.rpm rgmanager-1.9.68-1.i386.rpm
system-config-cluster-1.0.45-1.0.noarch.rpm gulm-1.0.10-0.i686.rpm

GFS RPMs and dependencies

rpm -ivh cmirror-1.0.1-1.i386.rpm cmirror-kernel-2.6.9-32.0.i686.rpm
cmirror-kernel-smp-2.6.9-32.0.i686.rpm GFS-6.1.14-0.i386.rpm
GFS-kernel-2.6.9-72.2.i686.rpm GFS-kernel-smp-2.6.9-72.2.i686.rpm
GFS-kernheaders-2.6.9-72.2.i686.rpm
lvm2-cluster-2.02.21-7.el4.i386.rpm


modprobe -v gfs
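
A quick sanity check that the modules actually loaded (module names taken
from the RPMs above):

lsmod | grep -E 'gfs|dlm|cman'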

Run system-config-cluster and configure the following (a rough cluster.conf
sketch is shown after the list):

1) cluster name       --- apps_cluster

2) clusternode name   --- node1
   clusternode name   --- node2

3) fencedevices

   fencedevice agent="fence_manual" name="test"

4) failoverdomains

   failoverdomain name="apps" ordered="0" restricted="0"
     failoverdomainnode name="node1" priority="1"
     failoverdomainnode name="node2" priority="1"

 Then start the cluster, and make sure it is up and running without errors
before going any further.
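
If you are starting the services by hand rather than rebooting, the usual
order on RHEL 4 is roughly the following, run on each node (skip clvmd if
you did not install lvm2-cluster):

service ccsd start
service cman start
service fenced start
service clvmd start
service gfs start
service rgmanager start
clustat                 # both nodes should show up as cluster members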

GFS also needs LVM. Execute the commands below from node1 or node2, provided
the storage LUNs are presented to both nodes (see the note on cluster
locking after the list).

Example:
1) pvcreate  /dev/sdd1
2) pvdisplay
3) vgcreate testapps /dev/sdd1
4) vgdisplay
5) lvcreate  -L 135G -n data testapps
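
Since the volume group is shared by both nodes, LVM should also be switched
to cluster-aware locking. A minimal sketch, assuming the lvm2-cluster
package from above is installed:

lvmconf --enable-cluster      # sets locking_type = 3 in /etc/lvm/lvm.conf
service clvmd start           # run on both nodes
vgchange -cy testapps         # flag the VG as clustered if it was not created that way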

 To format the file system with GFS you need the details below.

6) cman_tool  status

Cluster name: apps_cluster

apps_cluster is the name of the cluster; you can get it from the output of
the command above.

data is the logical volume name used in the lvcreate command above, so the
lock table here is apps_cluster:data.

gfs_mkfs -p lock_dlm -t apps_cluster:data -j 7 /dev/testapps/data


Options

-p LockProtoName
    The name of the locking protocol to use: lock_dlm for a cluster,
lock_nolock for a single node.

-t LockTableName
    The lock table field appropriate to the lock module you're using.
It is clustername:fsname, and clustername must match the one in
cluster.conf.

-j

Specifies the number of journals to be created by the gfs_mkfs
command. One journal is required for each node that mounts the file
system. (More journals than are needed can be specified at creation
time to allow for future expansion.)
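
If you do end up needing more journals later, they can also be added to a
mounted GFS file system; something like this should work (check the gfs_jadd
man page for your version):

gfs_jadd -j 2 /data-new       # add two more journals to the mounted file system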



After that you can mount it on your desired mount point; in our case we
created /data-new with mkdir.

mount -t gfs /dev/testapps/data  /data-new/
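
To have it mounted again after a reboot, you can add an fstab entry and let
the gfs init script pick it up at boot (the mount options here are just the
defaults):

/dev/testapps/data    /data-new    gfs    defaults    0 0

chkconfig gfs on              # the gfs init script mounts GFS entries from /etc/fstab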




2008/10/8 Ryan Golhar <golharam@xxxxxxxxx>:
> Has anyone successfully setup GFS?  I have SAN connected to several
> computers by fibre, and it appears that GFS is the way to go as opposed to
> use an NFS server.
>
> Do I really need to set up all the other aspects of a Redhat cluster to get
> GFS to work?  There doesn't seem to be a good HOW-TO of this anywhere, and
> the RedHat docs are not as helpful as I would have liked.
>
>
> --
> redhat-list mailing list
> unsubscribe mailto:redhat-list-request@xxxxxxxxxx?subject=unsubscribe
> https://www.redhat.com/mailman/listinfo/redhat-list
>

-- 
redhat-list mailing list
unsubscribe mailto:redhat-list-request@xxxxxxxxxx?subject=unsubscribe
https://www.redhat.com/mailman/listinfo/redhat-list
