Re: Backup strategies for large-ish GFS2 filesystems.

On Thu, Dec 10, 2009 at 03:03:48AM -0800, jr wrote:
> Hello Ray,
> unfortunately we only have a very small gfs volume running, but how are
> you doing backups? Are you doing snapshots and mounting them with
> lockproto=lock_nolock?
> regards,
> Johannes

That would be ideal -- unfortunately our underlying storage hardware
(IBM DS4300/FASt600) does not support snapshots.  If cLVM supported
snapshots I'd jump on going that route in a millisecond... :)
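
For the record, if our storage did support snapshots, the approach
Johannes describes would look roughly like this (device and mount
point names are made up for illustration):

    # Mount a block-level snapshot of the GFS2 filesystem on a single
    # backup node, bypassing the cluster lock manager since no other
    # node can see the snapshot.
    mount -t gfs2 -o lockproto=lock_nolock,ro /dev/backup_snap /mnt/gfs2-snap

    # Back up from the snapshot, then unmount and discard it.
    rsync -a /mnt/gfs2-snap/ /backup/gfs2/
    umount /mnt/gfs2-snap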

We've tried three methods (1) NetBackup to exposed NFS export of GFS2
filesystem; (2) rsync from remote machine to rsyncd on GFS2 node; (3)
rsync from remote machine to NFS export of GFS2 filesystem.

Option 1 is the slowest (6+ hours), 2 is somewhat better (3 hours) and
3 has been our best bet so far (82 minutes).  This is using the
--size-only argument to rsync in an effort to avoid reading mtime on an
inode.  Probably not much gain though, as it appears stat() is called
anyway.
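
For anyone who wants to compare, the two rsync-based runs look roughly
like the following, assuming we pull from the backup host (hostnames,
module names and paths here are placeholders):

    # Option 2: pull from an rsync daemon running on the GFS2 node
    rsync -a --size-only rsync://gfs2-node/gfs2-export/ /backup/gfs2/

    # Option 3: mount the GFS2 node's NFS export on the backup host,
    # then rsync from that mount to local backup storage
    mount -t nfs -o ro gfs2-node:/export/gfs2 /mnt/gfs2-nfs
    rsync -a --size-only /mnt/gfs2-nfs/ /backup/gfs2/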

I'm kind of surprised that rsync to NFS is faster than rsync --daemon.

I have been testing with our GFS2 filesystem mounted in spectator mode
on the passive node, but I don't think it's really making much
difference.
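
(The spectator mount on the passive node is just the standard mount
option; the device path below is a placeholder:

    # Read-only mount that does not use one of the filesystem's
    # journals, intended for nodes that only read the filesystem.
    mount -t gfs2 -o spectator /dev/clustervg/gfs2vol /mnt/gfs2
)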

It would be nice if GFS2 had some backup-friendly options for caching
this kind of inode information.  Obviously it does cache it to some
extent -- but it would help to have knobs we could turn on a node we
intend to run backups from so that, given ample memory, it caches all
the stat() information for 24+ hour periods...
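
The closest generic knob I'm aware of is the VFS cache pressure sysctl
-- not GFS2-specific, and we haven't measured whether it actually helps
here -- which biases the kernel toward keeping dentry/inode caches
around on the backup node:

    # Make the kernel much more reluctant to reclaim dentry and inode
    # caches relative to page cache (default is 100; 0 disables that
    # reclaim entirely and risks running out of memory).
    sysctl -w vm.vfs_cache_pressure=10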

Or maybe some cluster-filesystem-friendly backup tools, as I see these
problems exist on OCFS2 and Lustre as well...

Thanks for the reply.

> 
> Am Mittwoch, den 09.12.2009, 11:08 -0800 schrieb Ray Van Dolson:
> > How do those of you with large-ish GFS2 filesystems (and multiple
> > nodes) handle backups?  I'm specifically thinking of people running
> > mailspools and such with many files.
> > 
> > I'd be interested in hearing your space usage, inode usage and how long
> > it takes you to do a full and diff backup to see if the numbers we're
> > seeing are reasonable.
> > 
> > Thanks!
> > Ray
> > 

Ray

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
