RE: Backup GFS File system

We are using rsync to synchronise certain directories from the GFS file systems, as we don’t need a full clone of the disks.  In the past we have done this onto an ext3 file system, but this obviously has the limitation that it can’t be used by the original cluster if the primary storage fails.
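
Roughly, the sync step looks like this (the paths and options below are illustrative placeholders rather than our exact setup):

  rsync -aH --delete /gfs/shared/projects/ /backup/projects/
  rsync -aH --delete /gfs/shared/home/     /backup/home/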

Yes we are planning to do the backup from another server outside of the cluster.

Thanks for the info.

Ben


> -----Original Message-----
> From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of
> rhurst@xxxxxxxxxxxxxxxxx
> Sent: 22 May 2007 15:36
> To: linux-cluster@xxxxxxxxxx
> Subject: Re:  Backup GFS File system
> 
> Are you cloning the disk(s)?  If so, are you backing up the volumes on another server, outside of the
> cluster?  That is our implementation, and we came up with a process that allows for that.  You need to
> install the GFS/CS components on that server, but do not need to run them (ccs / cman / fence, etc.).  I
> marked the volumes with the lock_nolock protocol for mounting, although there is a way to use mount
> arguments to override what is on the volume (man gfs_mount).
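> 
> For example, something like this at mount time (the device path and mount point here are illustrative):
> 
>   mount -t gfs -o lockproto=lock_nolock /dev/VGWATSON-CLONE/lvol0 /mnt/clone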
> 
> Here's a snippet of a script we execute on the backup media server, after the clone is completed:
> 
> 
> 
> CLONE_NAME="watson-clone"
> VG="/dev/VGWATSON"
> 
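> # Find the PowerPath (emcpower) pseudo-device backing the clone, retrying while it comes online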
> for RETRY in `seq 3 -1 0`; do
>         DEVICE="`sudo ${POWERMT} display dev=all | grep -B 3 ${CLONE_NAME} | grep name=emcpower | awk -F= '{print $2}'`"
>         [ -n "${DEVICE}" ] && break;
>         sleep 5
> done
> if [ -z "${DEVICE}" ]; then
>         echo "NO PowerPath device found for ${CLONE_NAME}"
>         exit 1
> fi
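> # Mark partition 1 as LVM, then rename and activate the cloned VG so it does not clash with the original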
> sudo ${PARTED} /dev/${DEVICE} set 1 lvm on
> sudo ${VGRENAME} ${VG} ${VG}-CLONE 2> /dev/null
> sudo ${VGCHANGE} -a y --ignorelockingfailure ${VG}-CLONE 2> /dev/null
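> # Switch the GFS superblocks to lock_nolock so the clone mounts without the cluster stack ('y' confirms)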
> sudo ${GFS_TOOL} sb ${VG}-CLONE/lvol0 proto lock_nolock <<-EOD
> y
> EOD
> sudo ${GFS_TOOL} sb ${VG}-CLONE/lvol1 proto lock_nolock <<-EOD
> y
> EOD
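> # Mount the cloned file systems for the backup pass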
> sudo mount -t gfs ${VG}-CLONE/lvol0 /bcv/ccc/watson
> sudo mount -t gfs ${VG}-CLONE/lvol1 /bcv/ccc/watson/wav
> sudo mount -t ext3 ${VG}-CLONE/lvoldata /bcv/ccc/watson-data
> sudo mount -t ext3 ${VG}-CLONE/lvoldb1 /bcv/ccc/watson-data/sys/db1
> 
> 
> On Tue, 2007-05-22 at 14:56 +0100, Ben Yarwood wrote:
> 
> 
> 	I intend to create a backup of my GFS 6.1 file systems (3-node cluster) on a single backup
> machine and wanted to check some facts.
> 
> 
> 	1.  To run a GFS filesystem with nolock as the lock protocol, do I need the rest of the cluster
> infrastructure?
> 
> 
> 	2.  If I do need the rest of the cluster infrastructure, can you have a one-node cluster?
> 
> 
> 	3.  In the event of the primary storage failing and the backup being used, can I convert the
> lock protocol to dlm using gfs_tool,
> 	e.g. gfs_tool sb /dev/sdx proto lock_dlm?
> 
> 
> 	Thanks
> 	Ben
> 
> 
> 
> 
> 
> 
> 	--
> 	Linux-cluster mailing list
> 	Linux-cluster@xxxxxxxxxx
> 	https://www.redhat.com/mailman/listinfo/linux-cluster
> 
> 
> Robert Hurst, Sr. Caché Administrator
> Beth Israel Deaconess Medical Center
> 1135 Tremont Street, REN-7
> Boston, Massachusetts   02120-2140
> 617-754-8754 ∙ Fax: 617-754-8730 ∙ Cell: 401-787-3154
> Any technology distinguishable from magic is insufficiently advanced.




--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
