Re: gfs:gfs_assert_i+0x67/0x92 seen when node joining cluster

Hi,

On Fri, 2013-09-27 at 20:30 +0000, Hofmeister, James (HP ESSN BCS Linux ERT) wrote:
> I am not looking for a deep analysis of this problem, just a search
> for known issues… I have not found a duplicate in my Google and
> bugzilla searches.
> 
The trace looks to me as if the unlinked inode file (a hidden file) has become
corrupt on disk for some reason, and this has triggered an assert during
mount. Does fsck.gfs not fix this?
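
In case it helps, a minimal sketch of the procedure I have in mind; the
device path and mount point below are made up, and fsck.gfs is assumed to
take the usual -n (check only) and -y (answer yes to repairs) switches:

  # Unmount on ALL nodes first -- fsck must never run on a mounted GFS filesystem.
  umount /mnt/gfs

  # Read-only pass (hypothetical device path): report problems, change nothing.
  fsck.gfs -n /dev/vg_cluster/lv_gfs

  # If the report looks reasonable, let fsck repair the metadata.
  fsck.gfs -y /dev/vg_cluster/lv_gfs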

It isn't something that I recall seeing before, and even with a detailed
analysis of the on-disk filesystem it may not be possible to give an
exact explanation of what has gone wrong, depending on what state the
filesystem is currently in.

I would certainly double-check the fencing configuration in this case to
make sure that it is set up correctly, in case that is the issue.
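
As a rough starting point (the node name below is made up, and fence_node
really will fence, i.e. power-cycle, the target, so only test it on a node
you can afford to reboot):

  cman_tool status     # quorum and expected node count
  cman_tool nodes      # membership state of each node
  group_tool ls        # fence, dlm and gfs groups and their current state
  fence_node node2     # manually fence a test node to prove the agent works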

Steve.

> 1) RHEL version
> Red Hat Enterprise Linux Server release 5.7 (Tikanga)
> 
> 2) gfs* packages version
> gfs2-utils-0.1.62-31.el5.x86_64        Fri 20 Jan 2012 11:25:40 AM COT
> gfs-utils-0.1.20-10.el5.x86_64         Fri 20 Jan 2012 11:25:40 AM COT
> kmod-gfs-0.1.34-15.el5.x86_64          Fri 20 Jan 2012 11:26:53 AM COT
> 
> 3) kernel version
> Linux xxxxxx 2.6.18-274.el5 #1 SMP Fri Jul 8 17:36:59 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
> 
> 4) You can also attach cluster.conf for us.
> I will send in the cluster.conf when I open the support call.
> 
> They are hitting an error in gfs that has not been seen or reported at other sites:
> 
> Call Trace:
> [<ffffffff888b2ffb>] :gfs:gfs_assert_i+0x67/0x92
> [<ffffffff888a0ed4>] :gfs:unlinked_scan_elements+0x99/0x180
> [<ffffffff88887a5f>] :gfs:gfs_dreread+0x87/0xc6
> [<ffffffff888acdc8>] :gfs:foreach_descriptor+0x229/0x305
> [<ffffffff888a6d02>] :gfs:fill_super+0x0/0x642
> [<ffffffff888ad12b>] :gfs:gfs_recover_dump+0xdd/0x14e
> [<ffffffff888b1293>] :gfs:gfs_make_fs_rw+0xc0/0x11a
> [<ffffffff888a6766>] :gfs:init_journal+0x279/0x34c
> [<ffffffff888a7190>] :gfs:fill_super+0x48e/0x642
> [<ffffffff800e7461>] get_sb_bdev+0x10a/0x16c
> [<ffffffff800e6dfe>] vfs_kern_mount+0x93/0x11a
> [<ffffffff800e6ec7>] do_kern_mount+0x36/0x4d
> [<ffffffff800f18c5>] do_mount+0x6a9/0x719
> [<ffffffff8008e202>] enqueue_task+0x41/0x56
> [<ffffffff80045ad3>] do_sock_read+0xcf/0x110
> [<ffffffff8022c620>] sock_aio_read+0x4f/0x5e
> [<ffffffff8000cfdf>] do_sync_read+0xc7/0x104
> [<ffffffff800ceeb4>] zone_statistics+0x3e/0x6d
> [<ffffffff8000f470>] __alloc_pages+0x78/0x308
> [<ffffffff8004c0df>] sys_mount+0x8a/0xcd
> 
> Sep 18 04:09:51 hpium2 syslogd 1.4.1: restart.
> 
> Sep 18 04:09:51 hpium2 kernel: klogd 1.4.1, log source = /proc/kmsg started.
> 
>  
> 
> Regards,
> 
>   James Hofmeister   Hewlett Packard   Linux Engineering Resolution Team
> 


-- 
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster




