Bob Peterson <rpeterso@xxxxxxxxxx> writes:

> On Tue, 2008-02-12 at 19:13 +0100, Wagner Ferenc wrote:
>
>> I've compiled cluster-2.01.00 against Linux 2.6.23.16. On modprobe
>> gfs I got the following two kernel messages:
>>
>> gfs: no version for "gfs2_unmount_lockproto" found: kernel tainted.
>> GFS 2.01.00 (built Feb 12 2008 14:42:50) installed
>
> The HEAD / RHEL5 / (similar) versions of GFS use part of gfs2's
> locking infrastructure. For RHEL5, we did a patch to export
> those symbols from GFS2. The patch looks like the one I have
> below.
[...]
>
> --- a/fs/gfs2/locking.c	2008-02-11 11:10:57.000000000 -0600
> +++ b/fs/gfs2/locking.c	2008-02-08 14:10:36.000000000 -0600
> @@ -181,4 +181,6 @@ void gfs2_withdraw_lockproto(struct lm_l
>
> EXPORT_SYMBOL_GPL(gfs2_register_lockproto);
> EXPORT_SYMBOL_GPL(gfs2_unregister_lockproto);
> -
> +EXPORT_SYMBOL_GPL(gfs2_withdraw_lockproto);
> +EXPORT_SYMBOL_GPL(gfs2_mount_lockproto);
> +EXPORT_SYMBOL_GPL(gfs2_unmount_lockproto);

Actually, I had already patched my kernel tree like this. Whenever I
forgot to, I wasn't even allowed to load the gfs module into the
kernel. The "tainted" warning turned out to come from a slight
vermagic mismatch (a quick way to check this is sketched at the end of
this mail), and after recompiling everything properly it went away.

But the issue remained: the mount command just sits there, consuming
some CPU, and by now I've got the following console output (with my
notes in brackets):

[modprobe gfs]
GFS 2.01.00 (built Feb 12 2008 22:07:48) installed
[starting the cluster infrastructure]
dlm: Using TCP for communications
dlm: connecting to 3
dlm: got connection from 3
[mount /mnt]
Trying to join cluster "lock_dlm", "pilot:test"
Joined cluster. Now mounting FS...
GFS: fsid=pilot:test.4294967295: can't mount journal #4294967295
GFS: fsid=pilot:test.4294967295: there are only 6 journals (0 - 5)
[a couple of minutes passed here]
GFS: fsid=pilot:test.4294967295: Unmount seems to be stalled. Dumping lock state...

Glock (2, 25)
  gl_flags =
  gl_count = 2
  gl_state = 0
  req_gh = no
  req_bh = no
  lvb_count = 0
  object = yes
  new_le = no
  incore_le = no
  reclaim = no
  aspace = 0
  ail_bufs = no
  Inode:
    num = 25/25
    type = 1
    i_count = 1
    i_flags =
    vnode = no

Glock (5, 25)
  gl_flags =
  gl_count = 2
  gl_state = 3
  req_gh = no
  req_bh = no
  lvb_count = 0
  object = yes
  new_le = no
  incore_le = no
  reclaim = no
  aspace = no
  ail_bufs = no
  Holder
    owner = -1
    gh_state = 3
    gh_flags = 5 7
    error = 0
    gh_iflags = 1 6 7

Mount is still stalled and still consuming 6% of CPU.
--
Regards,
Feri.

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
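
For completeness, here is roughly how the vermagic mismatch mentioned
above can be spotted. This is only a sketch: the module path below is
an example of where a build might install gfs.ko, not necessarily
where yours ended up.

  # Print the vermagic string recorded in the module at build time.
  # Adjust the path to wherever your gfs.ko was actually installed.
  modinfo -F vermagic /lib/modules/$(uname -r)/extra/gfs.ko

  # Compare with the running kernel: the release part (2.6.23.16 here)
  # and options such as SMP have to agree, otherwise modprobe refuses
  # the module or loads it with a taint warning.
  uname -rv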