Re: gfs2-utils source for recovery purpose of a corrupt gfs2 partition

Hi Bob,

thanks for the prompt reply!

The fs was originally 12.4TB (6TB used).
After a resize attempt to 25TB with gfs2_grow (a very old version, gfs2-utils 1.62),
the fs was expanded, and the first impression looked good, as df reported a size of 25TB.
But looking at the fs from the second node (two-node system), ls -r and ls -R threw
I/O errors and the gfs2 mount froze (the machine had to be rebooted).
As gfs2 cannot be shrunk to roll the change back, the additional physical volume was
removed from the logical volume (lvresize to the original size & pvremove).
My thought was that this hard cut of the unfenced gfs2 partition could hopefully be repaired by
fsck.gfs2 (newest version).
Even if that turns out not to be the case, I could not run fsck.gfs2 at all, as it aborts with an
"Out of memory in compute_rgrp_layout" message.

See the strace output:

write(1, "9098813: start: 4769970307031 (0"..., 739098813: start: 4769970307031 (0x4569862bfd7), length = 524241 (0x7ffd1)
) = 73
write(1, "9098814: start: 4769970831272 (0"..., 739098814: start: 4769970831272 (0x456986abfa8), length = 524241 (0x7ffd1)
) = 73
write(1, "9098815: start: 4769971355513 (0"..., 739098815: start: 4769971355513 (0x4569872bf79), length = 524241 (0x7ffd1)
) = 73
write(1, "9098816: start: 4769971879754 (0"..., 739098816: start: 4769971879754 (0x456987abf4a), length = 524241 (0x7ffd1)
) = 73
write(1, "9098817: start: 4769972403995 (0"..., 739098817: start: 4769972403995 (0x4569882bf1b), length = 524241 (0x7ffd1)
) = 73
brk(0xb7dea000)                         = 0xb7dc9000
mmap2(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap2(NULL, 2097152, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap2(NULL, 1048576, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap2(NULL, 2097152, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap2(NULL, 1048576, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
write(2, "Out of memory in compute_rgrp_la"..., 37Out of memory in compute_rgrp_layout
) = 37
exit_group(-1)                          = ?

As I had already increased my swap space

swapon -s
Filename                                Type            Size    Used    Priority
/dev/sda3                               partition       8385920 0       -3
/var/swapfile.bin                       file            33554424        144     1
 
and ran into the same situation as before, I decided to start extracting the lost files with a small C program.

I have now created a big image (7TB) on an xfs partition and would like to recover my files of interest
with a program using libgfs2 or parts of the gfs2-utils source, as mentioned in my previous posting.
 
Since I can see nearly all of the files in the directory structure and can get their positions in the
image with a simple string search, I hope to extract them in a simpler way.
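
Just to sketch that idea in C (a rough illustration only, not a finished tool; the file name
find_offsets.c and the marker string are placeholders for whatever my real file headers start with,
nothing here comes from gfs2-utils). It reports the same byte offsets that "grep -abo" or
"strings -t d" already give, but it is a place to hook the extraction step in later:

/*
 * find_offsets.c - rough sketch only, names are made up.
 * Scans a raw image sequentially and prints the byte offset of every
 * occurrence of a marker string (whatever my file headers start with).
 * Build e.g.: gcc -O2 -D_FILE_OFFSET_BITS=64 -o find_offsets find_offsets.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK (16 * 1024 * 1024)        /* read the 7TB image in 16MB pieces */

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <image> <marker>\n", argv[0]);
        return 1;
    }
    const char *marker = argv[2];
    size_t mlen = strlen(marker);
    if (mlen == 0) {
        fprintf(stderr, "empty marker\n");
        return 1;
    }

    FILE *img = fopen(argv[1], "rb");
    if (!img) {
        perror("fopen");
        return 1;
    }

    char *buf = malloc(CHUNK + mlen);
    if (!buf) {
        perror("malloc");
        return 1;
    }

    unsigned long long base = 0;        /* image offset of buf[0] */
    size_t carry = 0;                   /* tail kept from the previous chunk */

    for (;;) {
        size_t got = fread(buf + carry, 1, CHUNK, img);
        size_t avail = carry + got;
        if (avail < mlen)               /* end of the image */
            break;

        for (size_t i = 0; i + mlen <= avail; i++)
            if (memcmp(buf + i, marker, mlen) == 0)
                printf("%llu\n", base + i);

        /* keep the last mlen-1 bytes so a marker split across two
         * chunks is still found in the next round */
        size_t keep = mlen - 1;
        memmove(buf, buf + avail - keep, keep);
        base += avail - keep;
        carry = keep;
    }

    free(buf);
    fclose(img);
    return 0;
}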

The RG size was set to the maximum value of 2GB, and each file I'm looking for is about 250MB.
The number of files to be recovered is more than 16k.
Each file has a header containing its file name and total size, so it should be easy to check whether
its recovery was successful.
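
To illustrate that extraction step (again only a rough sketch with made-up names: it assumes the data
of a file lies contiguously in the image behind its header, which gfs2 does not guarantee in general,
and it leaves the header parsing out because that layout is specific to my files):

/*
 * extract_range.c - rough sketch only, names are made up.
 * Copies <length> bytes starting at byte <offset> of the image into
 * <outfile>. The offset comes from the scanner above (or grep/strings),
 * the length from each file's own header.
 * Build e.g.: gcc -O2 -D_FILE_OFFSET_BITS=64 -o extract_range extract_range.c
 */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 5) {
        fprintf(stderr, "usage: %s <image> <offset> <length> <outfile>\n", argv[0]);
        return 1;
    }
    unsigned long long offset = strtoull(argv[2], NULL, 0);
    unsigned long long length = strtoull(argv[3], NULL, 0);

    FILE *img = fopen(argv[1], "rb");
    if (!img) {
        perror("fopen image");
        return 1;
    }
    FILE *out = fopen(argv[4], "wb");
    if (!out) {
        perror("fopen outfile");
        return 1;
    }

    if (fseeko(img, (off_t)offset, SEEK_SET) != 0) {
        perror("fseeko");
        return 1;
    }

    static char buf[1 << 20];           /* copy in 1MB pieces */
    unsigned long long left = length;

    while (left > 0) {
        size_t want = left < sizeof(buf) ? (size_t)left : sizeof(buf);
        size_t got = fread(buf, 1, want, img);
        if (got == 0) {
            fprintf(stderr, "short read: %llu bytes missing\n", left);
            break;
        }
        if (fwrite(buf, 1, got, out) != got) {
            perror("fwrite");
            return 1;
        }
        left -= got;
    }

    if (fclose(out) != 0) {
        perror("fclose");
        return 1;
    }
    fclose(img);
    return 0;
}

For a handful of files a plain dd with suitable bs/skip/count values would of course do the same copy;
the C version is only meant to be driven by a list of more than 16k offsets.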

So that's my theory, but without the right knowledge of gfs2 this could turn into an Easter vacation project.
As I'm lucky to have the gfs2-utils source, I hope it can be done.
But if there is a simpler way to do the recovery with the installed gfs2 programs, such as gfs2_edit or gfs2_tool
or other tools, it would be nice if someone could show me the proper way.


Many Thanks in advance

Markus 
 
-- 
*******************************************************
Markus Wolfgart
DLR Oberpfaffenhofen
German Remote Sensing Data Center
e-mail: markus.wolfgart@xxxxxx
**********************************************************

 
----- "Markus Wolfgart" <markus wolfgart dlr de> wrote:
| Hello Cluster and GFS Experts,
| 
| I'm a new subscriber to this mailing list and apologise
| in case my posting is off-topic.
| 
| I'm looking for help concerning a corrupt gfs2 file system
| which I could not recover with fsck.gfs2 (ver. 3.0.9)
| due to too little physical memory (4GB), even after increasing it
| with additional swap space (now about 35GB).
| 
| I would like to parse an image created of the lost fs (the first 6TB)
| with the code provided in the new gfs2-utils release.
| 
| Under these circumstances I hope to find in this mailing list some
| hints concerning an automated step-by-step recovery of the lost data.
| 
| Many thanks in advance for your help
| 
| Markus

Hi Markus,

You said that fsck.gfs2 is not working but you did not say what
messages it gives you when you try.  This must be a very big
file system.  How big is it?  Was it converted from gfs1?

Regards,

Bob Peterson
Red Hat File Systems

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
