Re: Unformatting a GFS cluster disk

On Sun, 2008-03-30 at 14:54 -0500, Wendy Cheng wrote:
snip...
> In general, GFS backup from the Linux side during run time has been a
> pain, mostly because of its slowness: the process has to walk through
> the whole filesystem and read every single file, which ends up
> accumulating a non-trivial number of cached glocks and a non-trivial
> amount of memory. For a sizable filesystem (say in the TB range, like
> yours), past experience has shown that after backup(s) the filesystem
> latency can go up to an unacceptable level unless its glocks are
> trimmed. There is a tunable written specifically for this purpose
> (glock_purge - introduced in RHEL 4.5), though.

What should I be setting glock_purge to?
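
(For anyone else following the thread: glock_purge is a per-mount
percentage set through gfs_tool settune. If I've read the RHEL 4.5
notes right, something like the line below would ask the glock scan to
trim up to half of the unused glocks - the mount point and the value 50
are just placeholders, and 0, the default, disables purging entirely:

    gfs_tool settune /mnt/gfs glock_purge 50

What I don't know is what percentage is sensible for a multi-TB
filesystem like ours - hence the question.)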

snip...
> The thinking here is to leverage the embedded Netapp copy-on-write
> feature to speed up the backup process with a reasonable disk space
> requirement. The snapshot volume and the cloned lun shouldn't take much
> disk space, and we can turn on the gfs readahead and glock_purge
> tunables with minimal interruption to the original gfs volume. The
> caveat here is GFS-mounting the cloned lun - for one, gfs itself at the
> moment doesn't allow mounting, on the same node, multiple devices that
> carry the same filesystem identifier (the -t value you use at mkfs
> time, e.g. "cluster-name:filesystem-name") - but that can be fixed (by
> rewriting the filesystem ID and lock protocol - I will start to test
> out the described backup script and a gfs kernel patch next week).
> Also, as with any tape backup from a linux host, you should not expect
> an image of a gfs-mountable device (when restoring from tape) - it is
> basically a collection of all the files residing on the gfs filesystem
> at the time the backup takes place.
> 
> Will the above serve your need? Maybe other folks have (other) better
> ideas?

This sounds like exactly what I need - and it's got to be useful for
everyone running gfs on a NetApp. Thanks for doing this! Let me know how
I can help. I've sketched below how I read the flow - please correct me
if the real script will look different.
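
To check my understanding, here is a rough sketch of how I imagine the
script would go. Everything below is guesswork on my part: the filer,
volume, lun, igroup, and locktable names are made up, I'm assuming the
clone shows up on the backup node as /dev/sdX after a scsi rescan, and
I'm assuming gfs_tool sb can rewrite the table and proto fields on the
unmounted clone (it prompts for confirmation). The readahead value is a
placeholder too.

    #!/bin/sh
    # Illustrative GFS backup via NetApp snapshot + lun clone.
    # All names are placeholders; error handling omitted.

    FILER=filer1                     # NetApp filer (hypothetical)
    SNAP=gfsbackup                   # snapshot name
    LUN=/vol/gfsvol/gfslun           # production lun backing the GFS volume
    CLONE=/vol/gfsvol/gfslun-clone   # clone lun we will mount read-only

    # 1. Snapshot the volume, then clone the lun off the snapshot.
    #    Both are copy-on-write, so neither should eat much filer space.
    rsh $FILER snap create gfsvol $SNAP
    rsh $FILER lun clone create $CLONE -b $LUN $SNAP
    rsh $FILER lun map $CLONE backup-igroup

    # ... rescan scsi/multipath here so the clone appears as /dev/sdX ...

    # 2. Rewrite the filesystem identifiers on the clone so it no longer
    #    collides with the production mount on this node, and switch the
    #    lock protocol so it mounts single-node without the cluster stack.
    gfs_tool sb /dev/sdX table "backup:gfsclone"
    gfs_tool sb /dev/sdX proto lock_nolock

    # 3. Mount read-only, turn up readahead, and stream the files to tape.
    mount -t gfs -o ro /dev/sdX /mnt/gfsclone
    gfs_tool settune /mnt/gfsclone max_readahead 1048576
    tar cf /dev/st0 -C /mnt/gfsclone .

    # 4. Tear everything down again.
    umount /mnt/gfsclone
    rsh $FILER lun offline $CLONE
    rsh $FILER lun destroy $CLONE
    rsh $FILER snap delete gfsvol $SNAP

If that's roughly the shape of it, I can test the kernel patch here once
it's ready.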



Regards,
-C

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
