(resend with better cc list)
On 11/30/2009 01:54 PM, Harald Dunkel wrote:
Sorry, wrong kernel. Here is the output for 2.6.31.6:
[ 374.736010] BUG: soft lockup - CPU#0 stuck for 61s! [ntpd:1657]
[ 374.736010] Modules linked in: ipv6 loop snd_pcm snd_timer snd soundcore snd_page_alloc virtio_balloon psmouse serio_raw pcspkr evdev i2c_piix4 i2c_core button processor reiserfs ide_cd_mod cdrom ata_generic ata_piix libata scsi_mod ide_pci_generic virtio_blk virtio_net piix uhci_hcd virtio_pci virtio_ring virtio floppy ehci_hcd ide_core thermal fan thermal_sys [last unloaded: scsi_wait_scan]
[ 374.736010] CPU 0:
[ 374.736010] Modules linked in: ipv6 loop snd_pcm snd_timer snd soundcore snd_page_alloc virtio_balloon psmouse serio_raw pcspkr evdev i2c_piix4 i2c_core button processor reiserfs ide_cd_mod cdrom ata_generic ata_piix libata scsi_mod ide_pci_generic virtio_blk virtio_net piix uhci_hcd virtio_pci virtio_ring virtio floppy ehci_hcd ide_core thermal fan thermal_sys [last unloaded: scsi_wait_scan]
[ 374.736010] Pid: 1657, comm: ntpd Not tainted 2.6.31.6 #1
[ 374.736010] RIP: 0010:[<ffffffff8102524d>] [<ffffffff8102524d>] kvm_deferred_mmu_op+0x58/0xd6
[ 374.736010] RSP: 0018:ffff88003d8ffc68 EFLAGS: 00000293
[ 374.736010] RAX: 0000000000000000 RBX: 0000000000000016 RCX: 000000003d8ffcaa
[ 374.736010] RDX: 0000000000000000 RSI: 0000000000000018 RDI: ffff88003d8ffcaa
[ 374.736010] RBP: ffffffff8100c5ae R08: 0000000000000080 R09: ffffea0000a8a598
[ 374.736010] R10: 000000000003a0d5 R11: 0000000000000001 R12: 00000000000280da
[ 374.736010] R13: 000000003d8ffe48 R14: ffff880000001700 R15: 000000000000fdf0
[ 374.736010] FS: 00007fa19b21f6f0(0000) GS:ffff8800015ac000(0000) knlGS:0000000000000000
[ 374.736010] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 374.736010] CR2: 00007fa19b229000 CR3: 000000003dcad000 CR4: 00000000000006f0
[ 374.736010] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 374.736010] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 374.736010] Call Trace:
[ 374.736010] [<ffffffff81025241>] ? kvm_deferred_mmu_op+0x4c/0xd6
[ 374.736010] [<ffffffff8102531b>] ? kvm_mmu_write+0x2b/0x31
[ 374.736010] [<ffffffff810b7840>] ? handle_mm_fault+0x300/0x77d
[ 374.736010] [<ffffffff8111b49f>] ? seq_release_net+0x0/0x3b
[ 374.736010] [<ffffffff81028f29>] ? do_page_fault+0x25f/0x27b
[ 374.736010] [<ffffffff812a19a5>] ? page_fault+0x25/0x30
[ 374.736010] [<ffffffff81171bfd>] ? copy_user_generic_string+0x2d/0x40
[ 374.736010] [<ffffffff810ea37c>] ? seq_read+0x300/0x380
[ 374.736010] [<ffffffff81113e9d>] ? proc_reg_read+0x6d/0x88
[ 374.736010] [<ffffffff810d3ca2>] ? vfs_read+0xaa/0x166
[ 374.736010] [<ffffffff810d3e1a>] ? sys_read+0x45/0x6e
[ 374.736010] [<ffffffff8100ba02>] ? system_call_fastpath+0x16/0x1b
:
:
Hm, pvmmu. Can you provide /proc/cpuinfo from the source (AMD) host?
Marcelo, shouldn't pvmmu be inactive after a migration from AMD to Intel?
Or maybe hypercall patching is screwing up?
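For anyone following along: the lockup is sitting in the pvmmu batching
path. From memory, the 2.6.31-era guest side in arch/x86/kernel/kvm.c
looks roughly like this (simplified sketch, details trimmed, not verbatim):

	/* sketch of the pvmmu guest path, arch/x86/kernel/kvm.c (2.6.31-ish) */

	static void kvm_mmu_op(void *buffer, unsigned len)
	{
		int r;
		unsigned long a1, a2;

		do {
			a1 = __pa(buffer);
			a2 = 0;		/* on i386 __pa() always returns <4G */
			/* traps into the host via vmcall/vmmcall */
			r = kvm_hypercall3(KVM_HYPERCALL_MMU_OP, len, a1, a2);
			buffer += r;
			len -= r;
		} while (len);	/* spins forever if the hypercall makes no progress */
	}

	static void kvm_deferred_mmu_op(void *buffer, int len)
	{
		struct kvm_para_state *state = kvm_para_state();

		/* batch the op; flush the whole queue via kvm_mmu_op() when full */
		if (state->mmu_queue_len + len > sizeof state->mmu_queue)
			mmu_queue_flush(state);
		memcpy(state->mmu_queue + state->mmu_queue_len, buffer, len);
		state->mmu_queue_len += len;
	}

Note the do/while in kvm_mmu_op(): if the hypercall keeps returning 0, the
guest busy-loops exactly where the soft lockup fires. On the patching
angle: hypercall sites are built with vmcall (0f 01 c1, if I remember the
opcodes right); on an AMD host that #UDs and kvm_fix_hypercall() rewrites
it in place to vmmcall (0f 01 d9). After a live migration from AMD to
Intel, the already-patched vmmcall is the wrong instruction for the new
host, which would fit the trace above.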
--
error compiling committee.c: too many arguments to function