Re: Re: GFS2 corruption/withdrawal/crash

Well, I just ran fsck.gfs2 against this filesystem twice, with a 10-minute
pause between runs:
# fsck -C -t gfs2 -y /dev/mapper/VGIMG0-LVIMG0

Output of second run:
fsck 1.39 (29-May-2006)
Initializing fsck
Recovering journals (this may take a while)...
Journal recovery complete.
Validating Resource Group index.
Level 1 RG check.
(level 1 passed)
Starting pass1
Pass1 complete
Starting pass1b
Pass1b complete
Starting pass1c
Pass1c complete
Starting pass2
Pass2 complete
Starting pass3
Pass3 complete
Starting pass4
Pass4 complete
Starting pass5
Unlinked block found at block 37974707 (0x24372b3), left unchanged.
..snip about 30 total of these..
Unlinked block found at block 96603710 (0x5c20e3e), left unchanged.
Pass5 complete
Writing changes to disk
gfs2_fsck complete
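Incidentally, if anyone wants the full list of those pass5 complaints, the decimal block numbers are easy to pull out of a saved copy of the output. This is just a sketch; `fsck.log` is a hypothetical capture (e.g. via `fsck -C -t gfs2 -y /dev/mapper/VGIMG0-LVIMG0 2>&1 | tee fsck.log`):

```shell
# Sketch: list the decimal block numbers from the pass5 "Unlinked block" lines.
# fsck.log is an assumed capture of the fsck.gfs2 output shown above.
grep 'Unlinked block found' fsck.log | awk '{print $6}'
```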

When it was done I remounted the filesystem and tried to "rm -rf /raid1/bad",
a subdirectory in the root of this filesystem that contains the zero-byte
file that was the focal point of this grief to start with.

Results:

Aug  8 19:09:43 server1 kernel: GFS2: fsid=clustname:raid1.0: fatal: invalid metadata block
Aug  8 19:09:43 server1 kernel: GFS2: fsid=clustname:raid1.0:   bh = 1633350398 (magic number)
Aug  8 19:09:43 server1 kernel: GFS2: fsid=clustname:raid1.0:   function = gfs2_meta_indirect_buffer, file = /builddir/build/BUILD/gfs2-kmod-1.92/_kmod_build_xen/meta_io.c, line = 334
Aug  8 19:09:43 server1 kernel: GFS2: fsid=clustname:raid1.0: about to withdraw this file system
Aug  8 19:09:43 server1 kernel: GFS2: fsid=clustname:raid1.0: telling LM to withdraw
Aug  8 19:09:44 server1 kernel: GFS2: fsid=clustname:raid1.0: withdrawn
Aug  8 19:09:44 server1 kernel:
Aug  8 19:09:44 server1 kernel: Call Trace:
Aug  8 19:09:44 server1 kernel:  [<ffffffff8854b91a>] :gfs2:gfs2_lm_withdraw+0xc1/0xd0
Aug  8 19:09:44 server1 kernel:  [<ffffffff80262907>] __wait_on_bit+0x60/0x6e
Aug  8 19:09:44 server1 kernel:  [<ffffffff80215780>] sync_buffer+0x0/0x3f
Aug  8 19:09:44 server1 kernel:  [<ffffffff80262981>] out_of_line_wait_on_bit+0x6c/0x78
Aug  8 19:09:44 server1 kernel:  [<ffffffff8029a016>] wake_bit_function+0x0/0x23
Aug  8 19:09:44 server1 kernel:  [<ffffffff8021a7f6>] submit_bh+0x10a/0x111
Aug  8 19:09:44 server1 kernel:  [<ffffffff8855d627>] :gfs2:gfs2_meta_check_ii+0x2c/0x38
Aug  8 19:09:44 server1 kernel:  [<ffffffff8854f168>] :gfs2:gfs2_meta_indirect_buffer+0x104/0x160
Aug  8 19:09:44 server1 kernel:  [<ffffffff8853f786>] :gfs2:recursive_scan+0x96/0x175
Aug  8 19:09:44 server1 kernel:  [<ffffffff8853f82c>] :gfs2:recursive_scan+0x13c/0x175
Aug  8 19:09:44 server1 kernel:  [<ffffffff8854065a>] :gfs2:do_strip+0x0/0x358
Aug  8 19:09:44 server1 kernel:  [<ffffffff88548d21>] :gfs2:glock_work_func+0x0/0xa8
Aug  8 19:09:44 server1 kernel:  [<ffffffff8853f8fe>] :gfs2:trunc_dealloc+0x99/0xe7
Aug  8 19:09:44 server1 kernel:  [<ffffffff8854065a>] :gfs2:do_strip+0x0/0x358
Aug  8 19:09:44 server1 kernel:  [<ffffffff80286595>] deactivate_task+0x28/0x5f
Aug  8 19:09:44 server1 kernel:  [<ffffffff8853fa99>] :gfs2:gfs2_truncatei_resume+0x10/0x1f
Aug  8 19:09:44 server1 kernel:  [<ffffffff8854734d>] :gfs2:do_promote+0x9a/0x117
Aug  8 19:09:44 server1 kernel:  [<ffffffff885485a1>] :gfs2:finish_xmote+0x28c/0x2b2
Aug  8 19:09:44 server1 kernel:  [<ffffffff88548d3e>] :gfs2:glock_work_func+0x1d/0xa8
Aug  8 19:09:44 server1 kernel:  [<ffffffff8024ee76>] run_workqueue+0x94/0xe4
Aug  8 19:09:44 server1 kernel:  [<ffffffff8024b781>] worker_thread+0x0/0x122
Aug  8 19:09:44 server1 kernel:  [<ffffffff80299dd0>] keventd_create_kthread+0x0/0xc4
Aug  8 19:09:44 server1 kernel:  [<ffffffff8024b871>] worker_thread+0xf0/0x122
Aug  8 19:09:44 server1 kernel:  [<ffffffff80286d8b>] default_wake_function+0x0/0xe
Aug  8 19:09:44 server1 kernel:  [<ffffffff80299dd0>] keventd_create_kthread+0x0/0xc4
Aug  8 19:09:44 server1 kernel:  [<ffffffff80299dd0>] keventd_create_kthread+0x0/0xc4
Aug  8 19:09:44 server1 kernel:  [<ffffffff802334b4>] kthread+0xfe/0x132
Aug  8 19:09:44 server1 kernel:  [<ffffffff8025fb2c>] child_rip+0xa/0x12
Aug  8 19:09:44 server1 kernel:  [<ffffffff80299dd0>] keventd_create_kthread+0x0/0xc4
Aug  8 19:09:44 server1 kernel:  [<ffffffff802333b6>] kthread+0x0/0x132
Aug  8 19:09:44 server1 kernel:  [<ffffffff8025fb22>] child_rip+0x0/0x12
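For what it's worth, when staring at traces like this I find it easier to strip them down to just the gfs2 frames. A sketch; `messages.log` is a hypothetical copy of the syslog lines above:

```shell
# Sketch: extract only the :gfs2: frames from a saved kernel call trace.
# messages.log is assumed to hold syslog lines like those shown above.
grep -o ':gfs2:[A-Za-z0-9_]*+0x[0-9a-f]*/0x[0-9a-f]*' messages.log
```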

<frown>

Any suggestions?

Thanks...


> Hi Wendell, 
> 
> Actually, the first run is still at 59% into pass1. My hardware 
> might be a touch slower than yours though. So far everything has 
> checked out and it hasn't found and/or fixed any problems. 
> I'm hoping it's done by Monday. 
> 
> Regards, 
> 
> Bob Peterson 
> Red Hat File Systems 

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
