Bugs item #2097242, was opened at 2008-09-06 17:56
Message generated for change (Comment added) made by sf-robot
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2097242&group_id=180599

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest
update.

Category: None
Group: None
>Status: Closed
Resolution: None
Priority: 5
Private: No
Submitted By: Cong Wang (jameswang99)
Assigned to: Nobody/Anonymous (nobody)
Summary: kernel null pointer dereference

Initial Comment:
Hi:

I was bringing up an amd64 Ubuntu guest on my x86_64 host using the following
command:

./kvm --smp=4 --image=../vm-imgs/ubuntu-4G.qcow2 --cdrom=../ubuntu-8.04.1-server-amd64.iso --no-tap --memory=1024

Then the system responded with:

kvm: emulating preempt notifiers; do not benchmark on this machine
loaded kvm module (kvm-74)
Unable to handle kernel NULL pointer dereference at 000000000000006b
RIP: [<ffffffff803a2312>] _raw_spin_trylock+0x6/0x2f
PGD 12e05b067 PUD 12b847067 PMD 0
Oops: 0002 [1] SMP
CPU 1
Modules linked in: kvm_intel kvm ndiswrapper nvidia(P)
Pid: 6205, comm: qemu-system-x86 Tainted: P 2.6.24-gentoo-r8 #9
RIP: 0010:[<ffffffff803a2312>] [<ffffffff803a2312>] _raw_spin_trylock+0x6/0x2f
RSP: 0018:ffff81012e0e1b38 EFLAGS: 00010046
RAX: 0000000000000000 RBX: 0000000000000083 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000000000000006b
RBP: ffff81012e0e1b38 R08: 0000000000000001 R09: 0000000000000001
R10: ffffffff8039ccf6 R11: 0000000000000002 R12: 000000000000006b
R13: 0000000000000296 R14: ffff81012e0e1d48 R15: 0000000000000000
FS: 00002b359eda7d50(0000) GS:ffff810138407c40(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 000000000000006b CR3: 0000000130c42000 CR4: 00000000000026e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process qemu-system-x86 (pid: 6205, threadinfo ffff81012e0e0000, task ffff81012bb9cbc0)
Stack: ffff81012e0e1b68 ffffffff805e93d2 0000000000000063 0000000000000063
 000000000000006b ffff81012e25c000 ffff81012e0e1b88 ffffffff8039ccf6
 00000000000000b3 0000000000000063 ffff81012e0e1ba8 ffffffff805e7e92
Call Trace:
 [<ffffffff805e93d2>] _spin_lock_irqsave+0x38/0x64
 [<ffffffff8039ccf6>] __down_write_trylock+0x16/0x46
 [<ffffffff805e7e92>] down_write+0x45/0x6a
 [<ffffffff887f763d>] :kvm:kvm_arch_set_memory_region+0x6a/0x187
 [<ffffffff887f31cd>] :kvm:__kvm_set_memory_region+0x330/0x3bd
 [<ffffffff887f3289>] :kvm:kvm_set_memory_region+0x2f/0x44
 [<ffffffff888166e0>] :kvm_intel:vmx_set_tss_addr+0x41/0x56
 [<ffffffff887f9481>] :kvm:kvm_arch_vm_ioctl+0x100/0x874
 [<ffffffff8039cdb5>] __up_read+0x8f/0x97
 [<ffffffff887f3781>] :kvm:kvm_vm_ioctl+0x20c/0x227
 [<ffffffff805eb1e2>] do_page_fault+0x42e/0x7f5
 [<ffffffff8029c936>] do_ioctl+0x2a/0x77
 [<ffffffff8029cbde>] vfs_ioctl+0x25b/0x278
 [<ffffffff8029cc3d>] sys_ioctl+0x42/0x65
 [<ffffffff8020b7be>] system_call+0x7e/0x83

Code: 87 07 31 d2 85 c0 0f 9f c2 85 d2 74 18 65 8b 04 25 24 00 00
RIP [<ffffffff803a2312>] _raw_spin_trylock+0x6/0x2f
 RSP <ffff81012e0e1b38>
CR2: 000000000000006b
---[ end trace 0cce785f1c425e40 ]---
BUG: sleeping function called from invalid context at kernel/rwsem.c:21
in_atomic():0, irqs_disabled():1
INFO: lockdep is turned off.
Pid: 6205, comm: qemu-system-x86 Tainted: P D 2.6.24-gentoo-r8 #9
Call Trace:
 [<ffffffff802314c4>] __might_sleep+0xc6/0xc8
 [<ffffffff805e7ed7>] down_read+0x20/0x6d
 [<ffffffff80239a24>] exit_mm+0x34/0xf7
 [<ffffffff8023b1c7>] do_exit+0x247/0x77d
 [<ffffffff805eb4b5>] do_page_fault+0x701/0x7f5
 [<ffffffff8026e050>] __alloc_pages+0x84/0x32c
 [<ffffffff8027442e>] zone_statistics+0x64/0x69
 [<ffffffff805e964d>] error_exit+0x0/0x9a
 [<ffffffff8039ccf6>] __down_write_trylock+0x16/0x46
 [<ffffffff803a2312>] _raw_spin_trylock+0x6/0x2f
 [<ffffffff805e93d2>] _spin_lock_irqsave+0x38/0x64
 [<ffffffff8039ccf6>] __down_write_trylock+0x16/0x46
 [<ffffffff805e7e92>] down_write+0x45/0x6a
 [<ffffffff887f763d>] :kvm:kvm_arch_set_memory_region+0x6a/0x187
 [<ffffffff887f31cd>] :kvm:__kvm_set_memory_region+0x330/0x3bd
 [<ffffffff887f3289>] :kvm:kvm_set_memory_region+0x2f/0x44
 [<ffffffff888166e0>] :kvm_intel:vmx_set_tss_addr+0x41/0x56
 [<ffffffff887f9481>] :kvm:kvm_arch_vm_ioctl+0x100/0x874
 [<ffffffff8039cdb5>] __up_read+0x8f/0x97
 [<ffffffff887f3781>] :kvm:kvm_vm_ioctl+0x20c/0x227
 [<ffffffff805eb1e2>] do_page_fault+0x42e/0x7f5
 [<ffffffff8029c936>] do_ioctl+0x2a/0x77
 [<ffffffff8029cbde>] vfs_ioctl+0x25b/0x278
 [<ffffffff8029cc3d>] sys_ioctl+0x42/0x65
 [<ffffffff8020b7be>] system_call+0x7e/0x83

uname -a outputs:
Linux localhost 2.6.24-gentoo-r8 #9 SMP Thu Sep 4 11:18:08 CDT 2008 x86_64 Intel(R) Core(TM)2 Duo CPU T8300 @ 2.40GHz GenuineIntel GNU/Linux

The version of KVM is KVM-74.

----------------------------------------------------------------------

>Comment By: SourceForge Robot (sf-robot)
Date: 2009-02-02 02:34

Message:
This Tracker item was closed automatically by the system. It was previously
set to a Pending status, and the original submitter did not respond within
14 days (the time period specified by the administrator of this Tracker).
----------------------------------------------------------------------

Comment By: Avi Kivity (avik)
Date: 2008-12-24 13:44

Message:
Is this repeatable with current kvm? The only way I can see this happening
is if current->mm is NULL, which shouldn't happen.

----------------------------------------------------------------------

Comment By: Glauber de Oliveira Costa (glommer)
Date: 2008-09-08 14:49

Message:
Logged In: YES
user_id=576830
Originator: NO

Are you sure this is a kvm problem, and not a kernel problem in the guest?
For example, can you identify an earlier version of kvm in which this
problem does not happen?

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2097242&group_id=180599

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html