[Bug 188171] Nested Virtualization via VT-x | VirtualBox in KVM: cannot launch VirtualBox guest OS due to 'general protection fault: 0000 [#1] SMP'

https://bugzilla.kernel.org/show_bug.cgi?id=188171

Paul <paulkek@xxxxxxxxxxxxxx> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |paulkek@xxxxxxxxxxxxxx

--- Comment #1 from Paul <paulkek@xxxxxxxxxxxxxx> ---
I guess this isn't fixed, even in mainline. I've just checked and it seems that
this is 100% reproducible.

L1: KVM * (Linux 4.10-rc4)
L2: VirtualBox / VMware (Linux 4.9.4)
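
A minimal sketch for double-checking the setup above, assuming GCC's <cpuid.h> on an Intel vCPU (the file name check_vmx.c is only illustrative): run inside the L1 guest, it reports whether KVM exposes the VMX feature bit (CPUID.01H:ECX bit 5) that VirtualBox/VMware need in the first place.

/* check_vmx.c - report whether VT-x (CPUID.01H:ECX bit 5) is visible
 * inside this guest.  Build: gcc -o check_vmx check_vmx.c
 */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID leaf 1 not supported\n");
        return 1;
    }

    /* ECX bit 5 of CPUID leaf 1 is the VMX feature flag */
    printf("VMX exposed to this guest: %s\n",
           (ecx & (1u << 5)) ? "yes" : "no");
    return 0;
}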

* dmesg when running VirtualBox

[   40.583722] SUPR0GipMap: fGetGipCpu=0xb
[   41.047407] general protection fault: 0000 [#1] SMP
[   41.047410] Modules linked in: nls_utf8 udf crc_itu_t fuse joydev uinput
xt_CHECKSUM ipt_MASQUERADE nf_nat_masquerade_ipv4 tun nf_conntrack_netbios_ns
nf_conntrack_broadcast xt_CT ip6t_rpfilter ip6t_REJECT nf_reject_ipv6
xt_conntrack ip_set nfnetlink ebtable_broute bridge stp llc ebtable_nat
ip6table_raw ip6table_security ip6table_mangle ip6table_nat nf_conntrack_ipv6
nf_defrag_ipv6 nf_nat_ipv6 iptable_raw iptable_security iptable_mangle
iptable_nat nf_conntrack_ipv4 vboxpci(OE) nf_defrag_ipv4 vboxnetadp(OE)
nf_nat_ipv4 nf_nat vboxnetflt(OE) nf_conntrack ebtable_filter ebtables
ip6table_filter ip6_tables vboxdrv(OE) kvm_intel kvm irqbypass crct10dif_pclmul
crc32_pclmul ghash_clmulni_intel ppdev virtio_balloon qemu_fw_cfg parport_pc
parport acpi_cpufreq tpm_tis tpm_tis_core tpm i2c_piix4 nfsd auth_rpcgss
[   41.047423]  nfs_acl lockd grace sunrpc virtio_net virtio_blk virtio_console
qxl drm_kms_helper ttm drm virtio_pci crc32c_intel serio_raw virtio_ring virtio
ata_generic pata_acpi
[   41.047427] CPU: 1 PID: 2191 Comm: EMT Tainted: G           OE   4.9.3-200.fc25.x86_64 #1
[   41.047428] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.1-20161122_114906-anatol 04/01/2014
[   41.047429] task: ffff9d56e4fdbe00 task.stack: ffffc0ab427a8000
[   41.047429] RIP: 0010:[<ffffffffc000baa7>]  [<ffffffffc000baa7>] 0xffffffffc000baa7
[   41.047431] RSP: 0018:ffffc0ab427abd58  EFLAGS: 00050206
[   41.047432] RAX: 00000000003406e0 RBX: 00000000ffffffdb RCX: 000000000000009b
[   41.047432] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffc0ab427abcb0
[   41.047587] RBP: ffffc0ab427abd78 R08: 0000000000000004 R09: 00000000003406e0
[   41.047588] R10: 0000000049656e69 R11: 000000000f8bfbff R12: 0000000000000020
[   41.047588] R13: 0000000000000000 R14: ffffc0ab4800107c R15: ffffffffc04922a0
[   41.047589] FS:  00007f27613cb700(0000) GS:ffff9d577fd00000(0000) knlGS:0000000000000000
[   41.047590] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   41.047591] CR2: 00007f2761158000 CR3: 0000000138bf0000 CR4: 00000000003406e0
[   41.047592] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   41.047593] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[   41.047593] Stack:
[   41.047594]  0000000000000000 ffffffff00000000 0000000000000000 0000000000000002
[   41.047595]  ffffc0ab427abd98 ffffffffc0026a23 ffffc0ab48001010 ffff9d5779db1a90
[   41.047597]  ffffc0ab427abe18 ffffffffc0457420 ffffc0ab427abdf8 0000000000040296
[   41.047598] Call Trace:
[   41.047606]  [<ffffffffc0457420>] ? supdrvIOCtl+0x2dc0/0x32c0 [vboxdrv]
[   41.047609]  [<ffffffffc04505e0>] ? VBoxDrvLinuxIOCtl_5_1_14+0x150/0x250 [vboxdrv]
[   41.047612]  [<ffffffff9f26db43>] ? do_vfs_ioctl+0xa3/0x5f0
[   41.047613]  [<ffffffff9f06280b>] ? __do_page_fault+0x23b/0x4e0
[   41.047614]  [<ffffffff9f26e109>] ? SyS_ioctl+0x79/0x90
[   41.047616]  [<ffffffff9f81bbf7>] ? entry_SYSCALL_64_fastpath+0x1a/0xa9
[   41.047617] Code: 88 d1 fc ff ff b9 3a 00 00 00 0f 32 48 c1 e2 20 89 c0 48 09 d0 48 89 05 d8 4b 0f 00 0f 20 e0 b9 9b 00 00 00 48 89 05 b1 4b 0f 00 <0f> 32 48 c1 e2 20 89 c0 b9 80 00 00 c0 48 09 d0 48 89 05 aa 4b
[   41.047629] RIP  [<ffffffffc000baa7>] 0xffffffffc000baa7
[   41.047630]  RSP <ffffc0ab427abd58>
[   41.047631] ---[ end trace 2d3de5d7dc5b188a ]---
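
For what it's worth, the dump itself hints at the failing operation: RCX is 0x9b and the faulting bytes "<0f> 32" decode to rdmsr, so this looks like vboxdrv reading MSR 0x9B (IA32_SMM_MONITOR_CTL), which the L1 KVM apparently answers with a #GP. A minimal sketch, assuming the msr module is loaded in the L1 guest and the program is run as root (the file name read_msr_9b.c is only illustrative), to see whether that MSR is readable there at all:

/* read_msr_9b.c - try to read MSR 0x9B (IA32_SMM_MONITOR_CTL) through
 * the msr driver; a failing pread() here mirrors the #GP the rdmsr in
 * vboxdrv would take.  Needs: modprobe msr, root.
 * Build: gcc -o read_msr_9b read_msr_9b.c
 */
#include <errno.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const off_t msr = 0x9b;               /* IA32_SMM_MONITOR_CTL */
    uint64_t val;
    int fd = open("/dev/cpu/0/msr", O_RDONLY);

    if (fd < 0) {
        perror("open /dev/cpu/0/msr (msr module loaded?)");
        return 1;
    }
    /* the msr driver takes the MSR index as the file offset */
    if (pread(fd, &val, sizeof(val), msr) != (ssize_t)sizeof(val)) {
        fprintf(stderr, "rdmsr 0x%llx failed: %s\n",
                (unsigned long long)msr, strerror(errno));
        close(fd);
        return 1;
    }
    printf("MSR 0x%llx = 0x%016llx\n",
           (unsigned long long)msr, (unsigned long long)val);
    close(fd);
    return 0;
}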

Is there any news regarding support for hypervisors other than KVM and Hyper-V? Could someone explain why this "support" is needed in the first place? I mean, nested KVM is already working fine.
