Re: Unformatting a GFS cluster disk


 



DRand@xxxxxxxxxxx wrote:


......The disk was previously a GFS disk and we reformatted it with exactly the same mkfs command both times. Here are more details. We are running the cluster on a Netapp SAN device.

The NetApp SAN device has embedded snapshot features (which have been the main reason most customers choose NetApp SAN devices). It can restore your previous filesystem easily, just a few commands away: go to the console, do a "snap list", find the volume that hosts the LUN used for GFS, then do a "snap restore". The gfs_edit approach (searching through the whole device block by block) is really a brute-force way to do the restore. Or do you not have a "snap restore" license?
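For reference, that procedure on a Data ONTAP (7-mode) filer console looks roughly like the sketch below. The exact "snap restore" flags vary by ONTAP release, so check the filer's snap man page first, and remember a volume SnapRestore reverts the entire volume, not just the GFS LUN. The volume and snapshot names here are placeholders:

```
filer> snap list vol_gfs
filer> snap restore -t vol -s nightly.0 vol_gfs
```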

-- Wendy

1) mkfs.gfs -J 1024 -j 4 -p lock_gulm -t aicluster:cmsgfs /dev/sda [100Gb device]
2) Copy lots of files to the disk
3) gfs_grow /san   [Extra 50Gb extension added to device]
4) Copy lots of files to the disk
5) mkfs.gfs -J 1024 -j 4 -p lock_gulm -t aicluster:cmsgfs /dev/sda

I have now read about resource groups and the GFS on-disk structure here:
  http://www.redhat.com/archives/cluster-devel/2006-August/msg00324.html

A couple more questions if you don't mind...

What exactly would the mkfs command have done? Would it have overwritten the resource group headers from the previous disk structure, or does it just wipe the superblock and journals?

If the resource group headers still exist, shouldn't they have a characteristic structure we could identify, enabling us to put 0xFF in only the correct places on disk?
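For what it's worth, they do have a characteristic structure: every GFS metadata block starts with a common header carrying a magic number and a type code, so any RG headers mkfs left alone should be mechanically findable. A minimal sketch in Python — the magic number and the GFS_METATYPE_RG code are from my reading of gfs_ondisk.h, and the 4096-byte block size is a guess, so verify all three before trusting any hit:

```python
#!/usr/bin/env python3
"""Sketch: scan a raw device or image for surviving GFS resource-group headers.

Assumptions to verify against gfs_ondisk.h:
  - every GFS metadata block starts with a big-endian gfs_meta_header:
      u32 mh_magic, u32 mh_type, u64 mh_generation, u32 mh_format, u32 mh_incarn
  - mh_magic is 0x01161970 (GFS_MAGIC) and RG headers use mh_type 2
    (GFS_METATYPE_RG)
  - the filesystem block size is 4096 bytes; read the real sb_bsize from
    the superblock instead of guessing.
"""
import struct
import sys

GFS_MAGIC = 0x01161970
GFS_METATYPE_RG = 2
BLOCK_SIZE = 4096  # assumption; use sb_bsize from the superblock

def find_rg_headers(path, block_size=BLOCK_SIZE):
    """Yield byte offsets of blocks whose first 8 bytes look like an RG header."""
    with open(path, "rb") as dev:
        offset = 0
        while True:
            block = dev.read(block_size)
            if len(block) < 8:
                break
            magic, mtype = struct.unpack(">II", block[:8])
            if magic == GFS_MAGIC and mtype == GFS_METATYPE_RG:
                yield offset
            offset += block_size

if __name__ == "__main__" and len(sys.argv) > 1:
    for off in find_rg_headers(sys.argv[1]):
        print("possible RG header at byte %d (block %d)" % (off, off // BLOCK_SIZE))
```

Run read-only against the device (or better, a dd image of it) before deciding where any 0xFF patching would go.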

Also, is there any way we can usefully depend on this information? Or would mkfs have wiped these special inodes too?

+ * A few special hidden inodes are contained in a GFS filesystem. They do
+ * not appear in any directories; instead, the superblock points to them
+ * using block numbers for their location.  The special inodes are:
+ *
+ *   Root inode:  Root directory of the filesystem
+ *   Resource Group Index: A file containing block numbers and sizes of all RGs
+ *   Journal Index: A file containing block numbers and sizes of all journals
+ *   Quota:  A file containing all quota information for the filesystem
+ *   License:  A file containing license information
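Since the superblock points at those hidden inodes by block number, dumping those pointers from the (re-created) superblock would be a natural first check. A sketch of that, with every offset and field taken from my reading of gfs_ondisk.h rather than from tested code — superblock assumed at byte 65536 (GFS_SB_ADDR 128 * GFS_BASIC_BLOCK 512), all fields big-endian:

```python
#!/usr/bin/env python3
"""Sketch: parse the GFS superblock and print the hidden-inode pointers.

Assumed layout (check gfs_ondisk.h): a 24-byte meta header (magic
0x01161970), then six u32 fields (fs_format, multihost_format, flags,
bsize, bsize_shift, seg_size), then (formal ino, block address) u64
pairs for the journal index, resource index, and root inodes.
"""
import struct

GFS_MAGIC = 0x01161970
SB_OFFSET = 128 * 512  # assumed superblock location

def read_superblock(path):
    with open(path, "rb") as dev:
        dev.seek(SB_OFFSET)
        raw = dev.read(96)  # header (24) + 6 u32 (24) + 3 inum pairs (48)
    magic, mtype = struct.unpack_from(">II", raw, 0)
    if magic != GFS_MAGIC:
        raise ValueError("no GFS magic at the expected superblock offset")
    fields = struct.unpack_from(">6I", raw, 24)
    inums = struct.unpack_from(">6Q", raw, 48)
    return {
        "bsize": fields[3],
        "jindex_block": inums[1],  # journal index dinode address
        "rindex_block": inums[3],  # resource group index dinode address
        "root_block": inums[5],    # root directory dinode address
    }
```

If mkfs with identical parameters rewrote the superblock, rindex_block will point at the new rindex; whether the old rindex's data blocks survive elsewhere is exactly the open question.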

In particular, there is one 11GB complete backup tar.gz on the disk somewhere. I'm wondering if we could write a custom utility that recognizes the GFS on-disk structure and extracts very large files from it.
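A first step toward that utility would be locating candidate start offsets for the tar.gz by scanning for the gzip magic bytes. A sketch of that (plain file carving, not GFS-aware — and note GFS does not guarantee an 11GB file is contiguous, so a real extractor would still need to follow the dinode and indirect blocks from whatever this finds):

```python
#!/usr/bin/env python3
"""Sketch: find candidate gzip stream starts on a raw device image by
scanning for the gzip member magic 1f 8b 08."""

GZIP_MAGIC = b"\x1f\x8b\x08"  # ID1, ID2, deflate compression method
CHUNK = 1 << 20               # scan 1 MiB at a time

def find_gzip_offsets(path):
    """Return byte offsets where a gzip header might begin."""
    offsets = []
    with open(path, "rb") as dev:
        read_so_far = 0
        tail = b""
        while True:
            chunk = dev.read(CHUNK)
            if not chunk:
                break
            buf = tail + chunk
            base = read_so_far - len(tail)  # file offset of buf[0]
            i = buf.find(GZIP_MAGIC)
            while i != -1:
                offsets.append(base + i)
                i = buf.find(GZIP_MAGIC, i + 1)
            read_so_far += len(chunk)
            tail = buf[-2:]  # keep 2 bytes so a straddling magic is caught
    return offsets
```

Expect false positives (those three bytes occur by chance roughly once per 16MB of random data), so each hit would still need sanity-checking, e.g. by trying to inflate a few kilobytes from it.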

Damon.
Working to protect human rights worldwide



--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster


