Re: GFS2 error recovering journal 0

Hi,

The first thing to try is running fsck on it; the more recent the
version of fsck, the better. The filesystem is refusing to mount because
it thinks there is something wrong with the journal, so it looks like it
needs manual correction.
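
With the filesystem unmounted on every node, run fsck from a single
node against the block device. The device path below is only an
example, so substitute your own:

  fsck.gfs2 -n /dev/your_vg/share   # check only, change nothing
  fsck.gfs2 -y /dev/your_vg/share   # answer yes to all repairs

Running it first with -n gives you an idea of how much damage there is
before you let it change anything.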

Steve.

On Wed, 2008-12-10 at 19:11 -0600, Nathan Stratton wrote:
> I have a production system that is down right now, any help would be 
> greatly appreciated.
> 
> I get a panic when I try to mount:
> 
> Dec 10 18:53:41 xen0 kernel: GFS2: fsid=xen_sjc:share.0: Joined cluster. 
> Now mounting FS...
> Dec 10 18:53:41 xen0 kernel: GFS2: fsid=xen_sjc:share.0: jid=0, already 
> locked for use
> Dec 10 18:53:41 xen0 kernel: GFS2: fsid=xen_sjc:share.0: jid=0: Looking at 
> journal...
> Dec 10 18:53:41 xen0 kernel: GFS2: fsid=xen_sjc:share.0: fatal: filesystem 
> consistency error
> Dec 10 18:53:41 xen0 kernel: GFS2: fsid=xen_sjc:share.0:   inode = 4 53
> Dec 10 18:53:41 xen0 kernel: GFS2: fsid=xen_sjc:share.0:   function = 
> jhead_scan, file = fs/gfs2/recovery.c, line = 239
> Dec 10 18:53:41 xen0 kernel: GFS2: fsid=xen_sjc:share.0: about to withdraw 
> this file system
> Dec 10 18:53:41 xen0 kernel: GFS2: fsid=xen_sjc:share.0: telling LM to 
> withdraw
> Dec 10 18:53:41 xen0 kernel: GFS2: fsid=xen_sjc:share.0: withdrawn
> Dec 10 18:53:41 xen0 kernel:
> Dec 10 18:53:41 xen0 kernel: Call Trace:
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff8863d0ee>] 
> :gfs2:gfs2_lm_withdraw+0xc1/0xd0
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff886497bc>] 
> :gfs2:gfs2_replay_read_block+0x78/0x89
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff8864986a>] 
> :gfs2:get_log_header+0x9d/0xe7
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff8864ee4f>] 
> :gfs2:gfs2_consist_inode_i+0x43/0x48
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff88649a12>] 
> :gfs2:gfs2_find_jhead+0xf5/0x119
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff88649b77>] 
> :gfs2:gfs2_recover_journal+0x141/0x837
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff88640558>] 
> :gfs2:gfs2_meta_read+0x17/0x65
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff802648f1>] 
> _spin_lock_irqsave+0x9/0x14
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff80222b5c>] __up_read+0x19/0x7f
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff88631111>] 
> :gfs2:gfs2_block_map+0x32b/0x33e
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff8864425c>] 
> :gfs2:map_journal_extents+0x6f/0x13b
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff886312cd>] 
> :gfs2:gfs2_write_alloc_required+0xfd/0x122
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff886445d5>] 
> :gfs2:init_journal+0x2ad/0x40c
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff8864cc3a>] 
> :gfs2:gfs2_jindex_hold+0x54/0x19c
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff88644793>] 
> :gfs2:init_inodes+0x5f/0x1d3
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff88644d27>] 
> :gfs2:fill_super+0x420/0x571
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff8863a56b>] 
> :gfs2:gfs2_glock_nq_num+0x3b/0x68
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff802cf7bb>] set_bdev_super+0x0/0xf
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff802cf7ca>] test_bdev_super+0x0/0xd
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff88644907>] 
> :gfs2:fill_super+0x0/0x571
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff802d077e>] get_sb_bdev+0x10a/0x164
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff802cac1d>] __kmalloc+0x8f/0x9f
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff886439e5>] 
> :gfs2:gfs2_get_sb+0x13/0x2f
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff802d011b>] 
> vfs_kern_mount+0x93/0x11a
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff802d01e4>] do_kern_mount+0x36/0x4d
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff802d9866>] do_mount+0x6a7/0x717
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff8020622a>] 
> hypercall_page+0x22a/0x1000
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff8020ba3a>] 
> free_hot_cold_page+0x107/0x14d
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff8020ae99>] 
> get_page_from_freelist+0x32e/0x3bc
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff80264997>] _read_lock_irq+0x9/0x19
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff802071e1>] find_get_page+0x4d/0x54
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff80213b9c>] 
> filemap_nopage+0x188/0x322
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff802090c2>] 
> __handle_mm_fault+0x755/0x11bd
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff802648f1>] 
> _spin_lock_irqsave+0x9/0x14
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff80222b5c>] __up_read+0x19/0x7f
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff8020ad5f>] 
> get_page_from_freelist+0x1f4/0x3bc
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff8020f412>] 
> __alloc_pages+0x65/0x2ce
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff8022b52a>] iput+0x4b/0x84
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff8024de16>] sys_mount+0x8a/0xcd
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff80260106>] system_call+0x86/0x8b
> Dec 10 18:53:41 xen0 kernel:  [<ffffffff80260080>] system_call+0x0/0x8b
> Dec 10 18:53:41 xen0 kernel:
> Dec 10 18:53:41 xen0 kernel: GFS2: fsid=xen_sjc:share.0: jid=0: Failed
> Dec 10 18:53:41 xen0 kernel: GFS2: fsid=xen_sjc:share.0: error recovering 
> journal 0: -5
> 
> 
> 
> ><>
> Nathan Stratton                                CTO, BlinkMind, Inc.
> nathan at robotics.net                         nathan at blinkmind.com
> http://www.robotics.net                        http://www.blinkmind.com
> 

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
