I am not sure whether this helps, but I also note that the cachefs usage and the errors seem to correspond to a significant increase in ACPI interrupts.

Thank you,
Joshua

On Tue, Aug 18, 2009 at 9:57 AM, Joshua Levine <spamjoshua@xxxxxxxxx> wrote:
> Thank you for the reply. The box from last night is down and awaiting
> remote hands, I am afraid.
>
> What I did today was compile from Linus' git tree and force a load on
> another machine, resulting in:
>
> Aug 18 09:32:18 dload05 kernel: [  268.336887] BUG: unable to handle kernel NULL pointer dereference at 0000000000000040
> Aug 18 09:32:18 dload05 kernel: [  268.337070] IP: [<ffffffffa02cfda5>] fscache_object_slow_work_execute+0x755/0xb70 [fscache]
> Aug 18 09:32:18 dload05 kernel: [  268.337214] PGD 377ba067 PUD 77ede067 PMD 0
> Aug 18 09:32:18 dload05 kernel: [  268.337399] Oops: 0002 [#1] SMP
> Aug 18 09:32:18 dload05 kernel: [  268.337541] last sysfs file: /sys/kernel/uevent_seqnum
> Aug 18 09:32:18 dload05 kernel: [  268.337609] CPU 0
> Aug 18 09:32:18 dload05 kernel: [  268.337709] Modules linked in: cachefiles af_packet autofs4 ipv6 nfs lockd fscache nfs_acl auth_rpcgss sunrpc iptable_filter ip_tables x_tables uhci_hcd usbkbd evdev psmouse serio_raw pcspkr k8temp button processor i2c_piix4 i2c_core shpchp pci_hotplug ext3 jbd mbcache usbhid sr_mod cdrom hid sd_mod sg tg3 ata_generic libphy ehci_hcd sata_svw ohci_hcd pata_serverworks pata_acpi libata usbcore scsi_mod floppy thermal fan thermal_sys fuse
> Aug 18 09:32:18 dload05 kernel: [  268.339993] Pid: 4480, comm: kslowd Not tainted 2.6.31-rc6-iwin-linus-cachefs #1 H8SSL
> Aug 18 09:32:18 dload05 kernel: [  268.340084] RIP: 0010:[<ffffffffa02cfda5>]  [<ffffffffa02cfda5>] fscache_object_slow_work_execute+0x755/0xb70 [fscache]
> Aug 18 09:32:18 dload05 kernel: [  268.340226] RSP: 0018:ffff880077cbbe60  EFLAGS: 00010246
> Aug 18 09:32:18 dload05 kernel: [  268.340294] RAX: 0000000000000000 RBX: ffff88007e7ba7f8 RCX: 01000000000000c1
> Aug 18 09:32:18 dload05 kernel: [  268.340366] RDX: ffff88007e7579d0 RSI: ffffea0001ba9b08 RDI: ffff88007e7ba79c
> Aug 18 09:32:18 dload05 kernel: [  268.340437] RBP: ffff88007e7ba780 R08: 0000000000000000 R09: ffffffff814def00
> Aug 18 09:32:18 dload05 kernel: [  268.340509] R10: 0000000000000000 R11: 0000000000000000 R12: ffff88007e7ba79c
> Aug 18 09:32:18 dload05 kernel: [  268.340581] R13: ffffffff810ad050 R14: 00000000ffffe114 R15: 0000000000000000
> Aug 18 09:32:18 dload05 kernel: [  268.340654] FS:  00007f5bb302a6f0(0000) GS:ffff880001627000(0000) knlGS:0000000000000000
> Aug 18 09:32:18 dload05 kernel: [  268.340744] CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
> Aug 18 09:32:18 dload05 kernel: [  268.340805] CR2: 0000000000000040 CR3: 0000000074afe000 CR4: 00000000000006f0
> Aug 18 09:32:18 dload05 kernel: [  268.340805] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> Aug 18 09:32:18 dload05 kernel: [  268.340805] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> Aug 18 09:32:18 dload05 kernel: [  268.341026] Process kslowd (pid: 4480, threadinfo ffff880077cba000, task ffff880037b8a080)
> Aug 18 09:32:18 dload05 kernel: [  268.341026] Stack:
> Aug 18 09:32:18 dload05 kernel: [  268.341026]  ffff88007e7ba7f8 0000000000000001 0000000000000004 ffffffff810ad050
> Aug 18 09:32:18 dload05 kernel: [  268.341026] <0> 0000000000000000 ffffffff810ad313 0000000000000001 ffff880037b8a080
> Aug 18 09:32:18 dload05 kernel: [  268.341026] <0> ffffffff8105c700 ffff880077cbbea8 ffff880077cbbea8 ffff880077cbbef8
> Aug 18 09:32:18 dload05 kernel: [  268.341026] Call Trace:
> Aug 18 09:32:18 dload05 kernel: [  268.341026]  [<ffffffff810ad050>] ? slow_work_thread+0x0/0x450
> Aug 18 09:32:18 dload05 kernel: [  268.341026]  [<ffffffff810ad313>] ? slow_work_thread+0x2c3/0x450
> Aug 18 09:32:18 dload05 kernel: [  268.341026]  [<ffffffff8105c700>] ? autoremove_wake_function+0x0/0x30
> Aug 18 09:32:18 dload05 kernel: [  268.341026]  [<ffffffff810ad050>] ? slow_work_thread+0x0/0x450
> Aug 18 09:32:18 dload05 kernel: [  268.341026]  [<ffffffff8105c596>] ? kthread+0xa6/0xb0
> Aug 18 09:32:18 dload05 kernel: [  268.341026]  [<ffffffff8100cd9a>] ? child_rip+0xa/0x20
> Aug 18 09:32:18 dload05 kernel: [  268.341026]  [<ffffffff8105c4f0>] ? kthread+0x0/0xb0
> Aug 18 09:32:18 dload05 kernel: [  268.341026]  [<ffffffff8100cd90>] ? child_rip+0x0/0x20
> Aug 18 09:32:18 dload05 kernel: [  268.341026] Code: ff ff 48 8b 45 60 48 83 65 28 fd 4c 8d 65 1c 48 89 ef 48 8b 00 ff 50 18 4c 89 e7 e8 76 67 02 e1 c7 45 00 06 00 00 00 48 8b 45 68 <f0> 0f ba 70 40 01 19 d2 85 d2 74 12 48 8b 7d 68 be 01 00 00 00
> Aug 18 09:32:18 dload05 kernel: [  268.345014] RIP  [<ffffffffa02cfda5>] fscache_object_slow_work_execute+0x755/0xb70 [fscache]
> Aug 18 09:32:18 dload05 kernel: [  268.345014]  RSP <ffff880077cbbe60>
> Aug 18 09:32:18 dload05 kernel: [  268.345014] CR2: 0000000000000040
> Aug 18 09:32:18 dload05 kernel: [  268.345603] ---[ end trace 4b1e94ca8e4d77ee ]---
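>
> In case it is useful, the Code: bytes above can be disassembled with the
> kernel's scripts/decodecode helper, run from the source tree, e.g.:
>
>   $ echo 'Code: ff ff 48 8b 45 60 48 83 65 28 fd 4c 8d 65 1c 48 89 ef 48 8b 00 ff 50 18 4c 89 e7 e8 76 67 02 e1 c7 45 00 06 00 00 00 48 8b 45 68 <f0> 0f ba 70 40 01 19 d2 85 d2 74 12 48 8b 7d 68 be 01 00 00 00' | scripts/decodecode
>
> My own reading of the marked bytes -- which may well be wrong, so please
> double-check -- is:
>
>   f0 0f ba 70 40 01    lock btrl $0x1,0x40(%rax)
>
> i.e. a locked test-and-clear of bit 1 at offset 0x40 from RAX. RAX is
> 0000000000000000 in the register dump, which would match the faulting
> address CR2 = 0000000000000040.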
>
> In an attempt to debug this, I retrieved the following:
>
> This GDB was configured as "x86_64-linux-gnu".
> (gdb) file vmlinux
> Reading symbols from /usr/local/src/work/vmlinux...done.
> (gdb) list *fscache_object_slow_work_execute+0x755
> No symbol "fscache_object_slow_work_execute" in current context.
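>
> I assume that lookup fails because fscache is built as a module, so its
> symbols are not in vmlinux. My understanding is that the module's debug
> info can be pulled in with something along these lines, where the address
> is the module's .text base as reported by
> /sys/module/fscache/sections/.text on the running machine (the value
> below is only a placeholder):
>
> (gdb) add-symbol-file fs/fscache/fscache.ko <.text address>
> (gdb) list *fscache_object_slow_work_execute+0x755
>
> I have not yet been able to verify that against the crashed box, though.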
>
> (gdb) list *slow_work_thread+0x0
> 0xffffffff810ad050 is in slow_work_thread (kernel/slow-work.c:370).
> 365
> 366     /*
> 367      * Worker thread dispatcher
> 368      */
> 369     static int slow_work_thread(void *_data)
> 370     {
> 371             int vsmax;
> 372
> 373             DEFINE_WAIT(wait);
> 374
>
> (gdb) list *slow_work_thread+0x2c3
> 0xffffffff810ad313 is in slow_work_thread (kernel/slow-work.c:199).
> 194             if (!test_and_clear_bit(SLOW_WORK_PENDING, &work->flags))
> 195                     BUG();
> 196
> 197             work->ops->execute(work);
> 198
> 199             if (very_slow)
> 200                     atomic_dec(&vslow_work_executing_count);
> 201             clear_bit_unlock(SLOW_WORK_EXECUTING, &work->flags);
> 202
> 203             /* if someone tried to enqueue the item whilst we were executing it,
>
> (gdb) list *autoremove_wake_function+0x0
> 0xffffffff8105c700 is in autoremove_wake_function (kernel/wait.c:164).
> 159             spin_unlock_irqrestore(&q->lock, flags);
> 160     }
> 161     EXPORT_SYMBOL(abort_exclusive_wait);
> 162
> 163     int autoremove_wake_function(wait_queue_t *wait, unsigned mode, int sync, void *key)
> 164     {
> 165             int ret = default_wake_function(wait, mode, sync, key);
> 166
> 167             if (ret)
> 168                     list_del_init(&wait->task_list);
>
> (gdb) list *kthread+0xa6
> 0xffffffff8105c596 is in kthread (kernel/kthread.c:80).
> 75              complete(&create->done);
> 76              schedule();
> 77
> 78              ret = -EINTR;
> 79              if (!self.should_stop)
> 80                      ret = threadfn(data);
> 81
> 82              /* we can't just return, we must preserve "self" on stack */
> 83              do_exit(ret);
> 84      }
>
> (gdb) list *child_rip+0xa
> 0xffffffff8100cd9a is at arch/x86/kernel/entry_64.S:1219.
> 1214     */
> 1215            movq    %rdi, %rax
> 1216            movq    %rsi, %rdi
> 1217            call    *%rax
> 1218            # exit
> 1219            mov     %eax, %edi
> 1220            call    do_exit
> 1221            ud2             # padding for call trace
> 1222            CFI_ENDPROC
> 1223    END(child_rip)
>
> (gdb) list *kthread+0x0
> 0xffffffff8105c4f0 is in kthread (kernel/kthread.c:60).
> 55              return to_kthread(current)->should_stop;
> 56      }
> 57      EXPORT_SYMBOL(kthread_should_stop);
> 58
> 59      static int kthread(void *_create)
> 60      {
> 61              /* Copy data: it's on kthread's stack */
> 62              struct kthread_create_info *create = _create;
> 63              int (*threadfn)(void *data) = create->threadfn;
> 64              void *data = create->data;
>
> (gdb) list *child_rip+0x0
> 0xffffffff8100cd90 is at arch/x86/kernel/entry_64.S:1209.
> 1204            ret
> 1205            CFI_ENDPROC
> 1206    END(kernel_thread)
> 1207
> 1208    ENTRY(child_rip)
> 1209            pushq   $0              # fake return address
> 1210            CFI_STARTPROC
> 1211    /*
> 1212     * Here we are in the child and the registers are set as they were
> 1213     * at kernel_thread() invocation in the parent.
>
> Please let me know what I can do to further help narrow this down.
>
> Thank you,
> Joshua

--
Linux-cachefs mailing list
Linux-cachefs@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cachefs