Re: Unformatting a GFS cluster disk

christopher barry wrote:
On Fri, 2008-03-28 at 07:42 -0700, Lombard, David N wrote:
On Thu, Mar 27, 2008 at 03:26:55PM -0400, christopher barry wrote:
On Wed, 2008-03-26 at 13:58 -0700, Lombard, David N wrote:
...
Can you point me at any docs that describe how best to implement snaps
against a gfs lun?
FYI, the NetApp "snapshot" capability is a result of their "WAFL" filesystem
<http://www.google.com/search?q=netapp+wafl>.  Basically, they use a
copy-on-write mechanism that naturally maintains older versions of disk blocks.

A fun feature is that multiple snapshots of a file have the identical
inode value.

fun as in 'May you live to see interesting times' kinda fun? Or really
fun?
The former.  POSIX says that two files with the identical st_dev and
st_ino must be the *identical* file, e.g., hard links.  On a snapshot,
they could be two *versions* of a file with completely different
contents.  Google suggests that this contradiction also exists
elsewhere, such as with the virtual FS provided by ClearCase's VOB.
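The POSIX identity rule is easy to see on an ordinary Linux filesystem. Here is a minimal sketch using throwaway file names (`orig.txt`, `link.txt`) and GNU `stat`; two hard links must report the same (st_dev, st_ino) pair:

```shell
# Two hard links: POSIX says matching (st_dev, st_ino) means the same file.
touch orig.txt
ln -f orig.txt link.txt

a=$(stat -c '%d:%i' orig.txt)   # device:inode of the original name
b=$(stat -c '%d:%i' link.txt)   # device:inode of the hard link
[ "$a" = "$b" ] && echo "identical (st_dev, st_ino) => same file"

rm -f orig.txt link.txt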


So, I'm trying to understand what to take away from this thread:
* I should not use them?
* I can use them, but keeping multiple snapshots introduces a risk that a
snap-restore could wipe files by putting a deleted version back on top of
a newer file?
* I should use them - but not use multiples.
* something completely different ;)

Wait! First, the "multiple snapshots sharing one inode" interpretation of WAFL is not correct. Second, there are plenty of documents on the NetApp NOW web site (where customers can get access) describing how to do snapshots with Linux filesystems such as ext3. Third, taking a snapshot of GFS is actually easier than of ext3, since an ext3 journal can live on a separate volume.

Will do a draft write-up as soon as I'm off my current task (sometime over this weekend).

-- Wendy
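Until that write-up appears, here is a dry-run sketch of one plausible sequence, assuming a GFS mount at /mnt/gfs and a filer reachable as filer1 with volume vol_gfs (all placeholder names, not from this thread): quiesce the filesystem with `gfs_tool freeze`, take the WAFL snapshot, then unfreeze. The RUN wrapper only echoes the plan; swap in `RUN() { "$@"; }` to actually execute it.

```shell
# Dry-run sketch: quiesce GFS around a NetApp snapshot.
# filer1, vol_gfs, /mnt/gfs and the snapshot name are placeholders.
PLAN=""
RUN() { PLAN="$PLAN$*;"; printf '+ %s\n' "$*"; }   # echo only

RUN gfs_tool freeze /mnt/gfs                  # flush and suspend writes
RUN rsh filer1 snap create vol_gfs nightly    # take the WAFL snapshot
RUN gfs_tool unfreeze /mnt/gfs                # resume normal operation
```

Freezing keeps the snapshot crash-consistent without the ext3-style worry about a journal sitting on a different volume.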
Our primary goal here is to use snapshots so we can back up to tape
from the snapshot over FC, rather than pulling a massive amount of
data over GbE NFS through our NAT director from one of our cluster nodes
to put it on tape. We have thought about a dedicated GbE backup network,
but would rather use the 4Gb FC fabric we've got.

If anyone can recommend a better way to accomplish that, I would love to
hear about how other people are backing up large-ish (1TB) GFS
filesystems to tape.
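One hedged way to do the FC-side backup, again as a dry-run sketch: present the snapshot LUN to a standalone backup host, mount it read-only with lockproto=lock_nolock (so no cluster lock manager is needed on that host), and stream the tree to the tape drive. /dev/mapper/snap_lun, /mnt/snap and /dev/st0 are placeholder names; it is worth checking whether the snapshot needs journal recovery before a read-only mount will succeed.

```shell
# Dry-run sketch: back up a GFS snapshot LUN to tape over FC.
# Device and mount-point names are placeholders.
PLAN=""
RUN() { PLAN="$PLAN$*;"; printf '+ %s\n' "$*"; }   # echo only

RUN mount -t gfs -o ro,lockproto=lock_nolock /dev/mapper/snap_lun /mnt/snap
RUN tar -cf /dev/st0 -C /mnt/snap .    # stream the whole tree to tape
RUN umount /mnt/snap
```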

Regards,
-C

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

