Re: SV: GFS2 problem.

Hi,

I was thinking more of the kernel version rather than the tools version.
What does uname -a say? The fact that it was built on the 4th May
suggests that it doesn't have all the latest bug fixes in. Generally
I'd suggest following Linus' upstream kernels (or the -nmw git tree)
to get the latest GFS2 code.
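
For example, something along these lines will show both the running kernel and the GFS2 module that's actually in use (assuming GFS2 is built as a module, as it is on the RHEL5 and Fedora kernels):

  uname -a                        # running kernel version and build date
  modinfo gfs2                    # version/srcversion of the gfs2 module on disk
  dmesg | grep "GFS2.*installed"  # the "GFS2 (built ...) installed" line from the log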

Fedora 7 is fairly up to date now, and FC5/FC6 had updates relatively
recently too. There will also be up-to-date GFS2 code appearing in the
forthcoming RHEL 5.1,

Steve.

On Mon, 2007-06-04 at 11:13 +0200, Kristoffer Lippert wrote:
> Hi,
> 
> I'm using version 0.1.25-1.el5 of gfs2-utils, but I guess the actual version of GFS2 is the one included in cman-2.0.64-1.el5.
> 
> According to the message log it's:
> Jun  4 08:37:23 app02 kernel: GFS2 (built May  4 2007 22:16:07) installed
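> 
> For reference, something like the following would confirm the exact installed package versions (assuming the standard RHEL5 package names):
> 
>   rpm -q gfs2-utils cman kernel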
> 
> 
> Is there a newer version?
> 
> Kind regards
> 
> Kristoffer Lippert
> Systems Manager
> JP/Politiken A/S
> Online Magasiner
> 
> Tel. +45 8738 3032
> Cell. +45 6062 8703
> 
> -----Original Message-----
> From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On behalf of Steven Whitehouse
> Sent: 4 June 2007 11:00
> To: linux clustering
> Subject: Re: GFS2 problem.
> 
> Hi,
> 
> What version of GFS2 are you using? This has been seen before, but not for a fair time now, so I suspect you might be using old code,
> 
> Steve.
> 
> On Mon, 2007-06-04 at 10:59 +0200, Kristoffer Lippert wrote:
> > Hi,
> > 
> > I get a slightly strange problem:
> > 
> > I'm setting up a new 2-node cluster, and one of my nodes gives this 
> > in the message log:
> > 
> > Jun  4 10:10:33 app01 kernel: GFS2: fsid=onmagcluster:onmag_gfs.1:
> > fatal: assertion "gfs2_glock_is_held_excl(gl)" failed
> > 
> > Jun  4 10:10:33 app01 kernel: GFS2: fsid=onmagcluster:onmag_gfs.1:
> > function = glock_lo_after_commit, file = fs/gfs2/lops.c, line = 61
> > 
> > Jun  4 10:10:33 app01 kernel: GFS2: fsid=onmagcluster:onmag_gfs.1:
> > about to withdraw this file system
> > Jun  4 10:10:33 app01 kernel: GFS2: fsid=onmagcluster:onmag_gfs.1:
> > telling LM to withdraw
> > 
> > After the error, the SAN is locked. I can't get the second node off 
> > the lock. I did an ls of /sandata (where the SAN is mounted), and I 
> > can't kill the process.
> > 
> > I cannot even reboot the server with reboot. I have to power-cycle it 
> > to get it back online.
> > 
> > I found one similar error on Google, posted to this list, but I did 
> > not find any replies.
> > Has anyone got any clues? 
> > 
> > 
> > Kind regards
> > 
> > Kristoffer Lippert
> > Systems Manager
> > JP/Politiken A/S
> > Online Magasiner
> > 
> > Tel. +45 8738 3032
> > Cell. +45 6062 8703
> > 
> 
> 

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
