Hi, do you have any idea whether these bugs in the gfs2 kernel module have been fixed yet? I should mention that I'm using gfs2 within Xen virtual machines, and the partition is taken from a LUN.

1) Under no load, randomly:

kernel BUG at fs/gfs2/glock.c:1131!
invalid opcode: 0000 [0000001]
SMP
last sysfs file: /fs/gfs2/Mycluster:gfs2http/lock_module/recover_done
Modules linked in: lock_dlm gfs2 dlm configfs xennet ipv6 dm_multipath parport_pc lp parport pcspkr dm_snapshot dm_zero dm_mirror dm_mod xenblk ext3 jbd ehci_hcd ohci_hcd uhci_hcd
CPU:    0
EIP:    0061:[<ee39a02f>]    Not tainted VLI
EFLAGS: 00010296   (2.6.18-53.1.6.el5xen 0000001)
EIP is at gfs2_glock_nq+0xec/0x18e [gfs2]
eax: 00000020   ebx: ed781cf8   ecx: 00000001   edx: f5416000
esi: ed781cf8   edi: ebf4fc34   ebp: ebf4fc34   esp: ebf21de8
ds: 007b   es: 007b   ss: 0069
Process gfs2_quotad (pid: 2458, ti=ebf21000 task=ed31b000 task.ti=ebf21000)
Stack: ee3af190 00000002 00000003 ee3af183 0000099a ee3af190 00000002 00000003
       ee3af183 0000099a ebce1000 00000000 ebf4fc34 ebce1000 ec1fa364 ebcdd688
       ee3a99ed ed781cf8 00020050 c0669780 ed781cf8 00000001 ec8792ec 00000001
Call Trace:
 [<ee3a99ed>] gfs2_rindex_hold+0x31/0x188 [gfs2]
 [<ee399f32>] glock_wait_internal+0x1db/0x1ec [gfs2]
 [<c044d4af>] __alloc_pages+0x57/0x282
 [<ee3aa34b>] gfs2_inplace_reserve_i+0xa3/0x57d [gfs2]
 [<ee399f32>] glock_wait_internal+0x1db/0x1ec [gfs2]
 [<c042f749>] down_read+0x8/0x11
 [<ee39e61b>] gfs2_log_reserve+0x11a/0x171 [gfs2]
 [<ee3ad294>] gfs2_do_trans_begin+0xe3/0x119 [gfs2]
 [<ee3a7550>] do_sync+0x2bb/0x5aa [gfs2]
 [<ee39fe8c>] getbuf+0xfc/0x106 [gfs2]
 [<ee3a7395>] do_sync+0x100/0x5aa [gfs2]
 [<ee3a80eb>] gfs2_quota_sync+0x200/0x26a [gfs2]
 [<ee392624>] gfs2_quotad+0x0/0x12c [gfs2]
 [<ee3926cd>] gfs2_quotad+0xa9/0x12c [gfs2]
 [<c042cc71>] kthread+0xc0/0xeb
 [<c042cbb1>] kthread+0x0/0xeb
 [<c0403005>] kernel_thread_helper+0x5/0xb
=======================
Code: d2 8b 56 20 b8 b3 f1 3a ee e8 6b c4 09 d2 ff 76 0c 68 83 f1 3a ee e8 7b 34 08 d2 ff 77 20 ff 77 14 68 90 f1 3a ee e8 6b 34 08 d2 <0f> 0b 6b 04 87 ef 3a ee 83 c4 28 8b 5e 0c 8d 4f 48 8b 47 48 eb
EIP: [<ee39a02f>] gfs2_glock_nq+0xec/0x18e [gfs2] SS:ESP 0069:ebf21de8
<0>Kernel panic - not syncing: Fatal exception
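For what it's worth, my reading of the "kernel BUG at fs/gfs2/glock.c:1131!" line followed by "invalid opcode: 0000": that pair is the usual signature of a tripped BUG_ON()-style assertion, because BUG() prints the file:line banner and then executes a ud2 instruction, which the CPU reports as an invalid opcode. Below is a minimal user-space sketch of that pattern (my own illustration, x86/gcc only, not actual GFS2 code; MY_BUG_ON and holder_state_is_bad are made-up names):

/*
 * Same shape as the kernel's BUG()/BUG_ON(): print the banner, then
 * execute ud2 so the CPU raises an invalid-opcode exception (SIGILL
 * when run in user space).
 */
#include <stdio.h>

#define MY_BUG() \
	do { \
		printf("kernel BUG at %s:%d!\n", __FILE__, __LINE__); \
		__asm__ volatile("ud2"); \
	} while (0)

#define MY_BUG_ON(cond) do { if (cond) MY_BUG(); } while (0)

int main(void)
{
	/* Hypothetical broken invariant, standing in for whatever
	 * condition gfs2_glock_nq checks at glock.c:1131. */
	int holder_state_is_bad = 1;

	MY_BUG_ON(holder_state_is_bad);   /* process dies here */
	return 0;
}

So as far as I can tell this trace and the next one (bug 2 below) are the same internal consistency check failing inside gfs2_glock_nq, not random memory corruption.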
2) As before, but maybe without quota initialized yet:

original: gfs2_unlink+0x8c/0x160 [gfs2]
new: gfs2_unlink+0x8c/0x160 [gfs2]
------------[ cut here ]------------
kernel BUG at fs/gfs2/glock.c:1131!
invalid opcode: 0000 [0000001]
SMP
last sysfs file: /kernel/dlm/gfs2http/control
Modules linked in: lock_dlm gfs2 dlm configfs ipv6 xennet dm_mirror dm_multipath dm_mod parport_pc lp parport pcspkr xenblk ext3 jbd ehci_hcd ohci_hcd uhci_hcd
CPU:    0
EIP:    0061:[<ee35f02f>]    Not tainted VLI
EFLAGS: 00010296   (2.6.18-53.1.6.el5xen 0000001)
EIP is at gfs2_glock_nq+0xec/0x18e [gfs2]
eax: 00000020   ebx: ed39feec   ecx: 00000001   edx: f5416000
esi: ed39feec   edi: ecac79ac   ebp: ecac79ac   esp: ed39fe58
ds: 007b   es: 007b   ss: 0069
Process tar (pid: 5447, ti=ed39f000 task=c03e6000 task.ti=ed39f000)
Stack: ee374190 00000003 00000001 ee374183 00001547 ee374190 00000003 00000001
       ee374183 00001547 eccf1000 00000000 ed39fea4 00000000 ec029464 eaf941ac
       ee369715 eb325e60 eccf1000 ebfe890c ebfe890c ebfe88d4 00001547 00000001
Call Trace:
 [<ee369715>] gfs2_unlink+0xb8/0x160 [gfs2]
 [<ee3696a9>] gfs2_unlink+0x4c/0x160 [gfs2]
 [<ee3696c0>] gfs2_unlink+0x63/0x160 [gfs2]
 [<ee3696e9>] gfs2_unlink+0x8c/0x160 [gfs2]
 [<ee36e9e4>] gfs2_rindex_hold+0x28/0x188 [gfs2]
 [<c04738a1>] vfs_unlink+0xa3/0xd9
 [<c0475423>] do_unlinkat+0x85/0x10e
 [<c04079fe>] do_syscall_trace+0xab/0xb1
 [<c040534f>] syscall_call+0x7/0xb
=======================
Code: d2 8b 56 20 b8 b3 41 37 ee e8 6b 74 0d d2 ff 76 0c 68 83 41 37 ee e8 7b e4 0b d2 ff 77 20 ff 77 14 68 90 41 37 ee e8 6b e4 0b d2 <0f> 0b 6b 04 87 3f 37 ee 83 c4 28 8b 5e 0c 8d 4f 48 8b 47 48 eb
EIP: [<ee35f02f>] gfs2_glock_nq+0xec/0x18e [gfs2] SS:ESP 0069:ed39fe58
<0>Kernel panic - not syncing: Fatal exception

3) Under heavy load, apparently:

dlm: lockspace 30003 from 3 type 1 not found
dlm: lockspace 30003 from 5 type 1 not found
dlm: lockspace 30003 from 2 type 1 not found
dlm: drop message 7 from 5 for unknown lockspace 196611
dlm: lockspace 30003 from 2 type 1 not found
BUG: unable to handle kernel NULL pointer dereference at virtual address 00000054
printing eip:
ee320e07
2b9c3000 -> *pde = 00000000:a8240001
2b94e000 -> *pme = 00000000:c0e03067
2bac3000 -> *pte = 00000000:00000000
Oops: 0002 [0000001]
SMP
last sysfs file: /kernel/dlm/gfs2http/control
Modules linked in: ipv6 lock_dlm gfs2 dlm configfs xennet dm_mirror dm_multipath dm_mod parport_pc lp parport pcspkr xenblk ext3 jbd ehci_hcd ohci_hcd uhci_hcd
CPU:    0
EIP:    0061:[<ee320e07>]    Not tainted VLI
EFLAGS: 00010286   (2.6.18-53.1.6.el5xen 0000001)
EIP is at revoke_lo_add+0x12/0x2f [gfs2]
eax: ed1ee000   ebx: ee320df5   ecx: ea2761f4   edx: 00000000
esi: ed1ee000   edi: ed1ee000   ebp: 00000000   esp: ed7dbe0c
ds: 007b   es: 007b   ss: 0069
Process pdflush (pid: 89, ti=ed7db000 task=ed7d8000 task.ti=ed7db000)
Stack: ee32f37d ea2a3dd4 ea2761e4 ee321ffd 00000000 ea2a3dd4 ea2761e4 c1983b80
       ed1ee000 ee322e6e 00000000 ea2a3dd4 00000000 c1983b80 ed05a4e4 000000b0
       00000000 ee322fff ed7dbf74 ed1ee000 c1983b80 00000001 00000000 ed7dbf74
Call Trace:
 [<ee32f37d>] gfs2_trans_add_revoke+0x51/0x54 [gfs2]
 [<ee321ffd>] gfs2_remove_from_journal+0xf1/0x101 [gfs2]
 [<ee322e6e>] gfs2_invalidatepage+0xd5/0x12d [gfs2]
 [<ee322fff>] gfs2_writepage+0xb4/0xec [gfs2]
 [<c0485cfe>] mpage_writepages+0x19b/0x304
 [<ee322f4b>] gfs2_writepage+0x0/0xec [gfs2]
 [<c044dd92>] do_writepages+0x2b/0x32
 [<c048455a>] __writeback_single_inode+0x168/0x2a7
 [<c048496e>] sync_sb_inodes+0x170/0x213
 [<c0484bbf>] writeback_inodes+0x6a/0xb0
 [<c044e1d5>] wb_kupdate+0x7b/0xdb
 [<c044e5ed>] pdflush+0x0/0x1af
 [<c044e700>] pdflush+0x113/0x1af
 [<c044e15a>] wb_kupdate+0x0/0xdb
 [<c042cc71>] kthread+0xc0/0xeb
 [<c042cbb1>] kthread+0x0/0xeb
 [<c0403005>] kernel_thread_helper+0x5/0xb
=======================
Code: 19 33 ee 68 ab 18 33 ee b9 db ff 32 ee 89 f0 e8 bf e8 00 00 58 5a 5b 5e c3 89 d1 89 e2 81 e2 00 f0 ff ff 8b 12 8b 92 cc 04 00 00 <ff> 42 54 c7 42 34 01 00 00 00 8d 90 e4 05 00 00 ff 80 c8 05 00
EIP: [<ee3
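On the third one: edx is 00000000 and the faulting code bytes <ff> 42 54 decode to "incl 0x54(%edx)", so it looks to me like revoke_lo_add is incrementing a counter through a structure pointer that is NULL, which is why the fault address is the tiny value 0x00000054 (just a member offset added to NULL). Here is a sketch of that pattern (my own illustration with made-up names, not the real GFS2 layout):

/*
 * Why a NULL-pointer write shows up as a fault at a small address like
 * 0x54: the reported address is simply the offset of the accessed
 * member within the structure, added to the NULL base pointer.
 */
#include <stdio.h>
#include <stddef.h>

struct demo_glock {
	char padding[0x54];   /* whatever fields come before it */
	int  num_revokes;     /* ends up at offset 0x54 from the base */
};

int main(void)
{
	struct demo_glock *gl = NULL;   /* pointer that was never set up */

	printf("offset of num_revokes: 0x%zx\n",
	       offsetof(struct demo_glock, num_revokes));

	/* Uncommenting the next line reproduces the pattern: the write
	 * goes to address 0x54, i.e. NULL + that offset.
	 *
	 * gl->num_revokes++;
	 */
	(void)gl;
	return 0;
}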
--
mr

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster