>>
>> I tried to reproduce such a problem, and I found L2 (Linux) hangs in
>> SeaBIOS, after line "iPXE (http://ipxe.org) ...". It happens with or
>> w/o VMCS shadowing (and even without my virtual EPT patches). I didn't
>> realize this problem until I updated the L1 kernel to the latest (e.g.
>> 3.9.0) from 3.7.0. L0 uses the kvm.git, next branch. It's possible
>> that the L1 kernel exposed a bug with the nested virtualization, as we
>> saw such cases before.
>>
> This is probably fixed by 8d76c49e9ffeee839bc0b7a3278a23f99101263e. Try
> it please.

I don't see the above SeaBIOS hang; however, I'm able to consistently
reproduce this stack trace when booting the L1 guest:

============
....
[ 2.516894] VFS: Cannot open root device "mapper/fedora-root" or unknown-block(0,0): error -6
[ 2.527636] Please append a correct "root=" boot option; here are the available partitions:
[ 2.538792] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
[ 2.539716] Pid: 1, comm: swapper/0 Not tainted 3.8.11-200.fc18.x86_64 #1
[ 2.539716] Call Trace:
[ 2.539716]  [<ffffffff81649c19>] panic+0xc1/0x1d0
[ 2.539716]  [<ffffffff81d010e0>] mount_block_root+0x1fa/0x2ac
[ 2.539716]  [<ffffffff81d011e9>] mount_root+0x57/0x5b
[ 2.539716]  [<ffffffff81d0132a>] prepare_namespace+0x13d/0x176
[ 2.539716]  [<ffffffff81d00e1c>] kernel_init_freeable+0x1cf/0x1da
[ 2.539716]  [<ffffffff81d00610>] ? do_early_param+0x8c/0x8c
[ 2.539716]  [<ffffffff81637ca0>] ? rest_init+0x80/0x80
[ 2.539716]  [<ffffffff81637cae>] kernel_init+0xe/0xf0
[ 2.539716]  [<ffffffff8165bd6c>] ret_from_fork+0x7c/0xb0
[ 2.539716]  [<ffffffff81637ca0>] ? rest_init+0x80/0x80
[ 2.539716] Uhhuh. NMI received for unknown reason 30 on CPU 1.
[ 2.539716] Do you have a strange power saving mode enabled?
[ 2.539716] Dazed and confused, but trying to continue
[ 2.539716] Uhhuh. NMI received for unknown reason 20 on CPU 1.
============

However, L1 boots just fine. When I try to boot L2, it throws this
different stack trace:

============
[176092.303585]  lock(&dev->device_lock);
[176092.307947]
[176092.307947]  *** DEADLOCK ***
[176092.307947]
[176092.314943] 2 locks held by systemd/1:
[176092.319283]  #0: (misc_mtx){+.+.+.}, at: [<ffffffff814534b8>] misc_open+0x28/0x1d0
[176092.328104]  #1: (&wdd->lock){+.+...}, at: [<ffffffff81557f22>] watchdog_start+0x22/0x80
[176092.337532]
[176092.337532] stack backtrace:
[176092.342661] CPU: 1 PID: 1 Comm: systemd Not tainted 3.10.0-0.rc0.git23.1.fc20.x86_64 #1
[176092.351823] Hardware name: Intel Corporation Shark Bay Client platform/Flathead Creek Crb, BIOS HSWLPTU1.86C.0109.R03.1301282055 01/28/2013
[176092.366101]  ffffffff8257d070 ffff880241b1b9c0 ffffffff81719128 ffff880241b1ba00
[176092.374617]  ffffffff81714d75 ffff880241b1ba50 ffff880241b80960 ffff880241b80000
[176092.383130]  0000000000000002 0000000000000002 ffff880241b80960 ffff880241b1bac0
[176092.391647] Call Trace:
[176092.394514]  [<ffffffff81719128>] dump_stack+0x19/0x1b
2m OK ] Re[176092.400430]  [<ffffffff81714d75>] print_circular_bug+0x201/0x210
[176092.408898]  [<ffffffff810db094>] __lock_acquire+0x17c4/0x1b30
ached target Shu[176092.415602]  [<ffffffff81720d7c>] ? _raw_spin_unlock_irq+0x2c/0x50
[176092.424276]  [<ffffffff810dbbf2>] lock_acquire+0xa2/0x1f0
tdown.
[176092.430489]  [<ffffffff8149028d>] ? mei_wd_ops_start+0x2d/0xf0
[176092.438070]  [<ffffffff8171d590>] mutex_lock_nested+0x80/0x400
[176092.444772]  [<ffffffff8149028d>] ? mei_wd_ops_start+0x2d/0xf0
[176092.451471]  [<ffffffff8149028d>] ? mei_wd_ops_start+0x2d/0xf0
[176092.458172]  [<ffffffff81557f22>] ? watchdog_start+0x22/0x80
[176092.464678]  [<ffffffff81557f22>] ? watchdog_start+0x22/0x80
[176092.471182]  [<ffffffff8149028d>] mei_wd_ops_start+0x2d/0xf0
[176092.477687]  [<ffffffff81557f5d>] watchdog_start+0x5d/0x80
[176092.483994]  [<ffffffff81558168>] watchdog_open+0x88/0xf0
[176092.490214]  [<ffffffff81453547>] misc_open+0xb7/0x1d0
[176092.496128]  [<ffffffff811e15d2>] chrdev_open+0x92/0x1d0
[176092.502240]  [<ffffffff811da57b>] do_dentry_open+0x24b/0x300
[176092.508745]  [<ffffffff812e8e7c>] ? security_inode_permission+0x1c/0x30
[176092.516330]  [<ffffffff811e1540>] ? cdev_put+0x30/0x30
[176092.522243]  [<ffffffff811da670>] finish_open+0x40/0x50
[176092.528256]  [<ffffffff811ec139>] do_last+0x4d9/0xe40
[176092.534071]  [<ffffffff811ecb53>] path_openat+0xb3/0x530
[176092.540193]  [<ffffffff810acc1f>] ? local_clock+0x5f/0x70
[176092.546403]  [<ffffffff8101fcf5>] ? native_sched_clock+0x15/0x80
[176092.553301]  [<ffffffff810d5d9d>] ? trace_hardirqs_off+0xd/0x10
[176092.560099]  [<ffffffff811ed658>] do_filp_open+0x38/0x80
[176092.566211]  [<ffffffff81720c77>] ? _raw_spin_unlock+0x27/0x40
[176092.572913]  [<ffffffff811fc39f>] ? __alloc_fd+0xaf/0x200
[176092.579123]  [<ffffffff811db9a9>] do_sys_open+0xe9/0x1c0
[176092.585235]  [<ffffffff811dba9e>] SyS_open+0x1e/0x20
[176092.590953]  [<ffffffff8172a999>] system_call_fastpath+0x16/0x1b
Sending SIGTERM to remaining processes...
[176092.622745] systemd-journald[338]: Received SIGTERM
Sending SIGKILL to remaining processes...
Hardware watchdog 'INTCAMT', version 0
Unmounting file systems.
Unmounting /sys/kernel/config.
Unmounting /dev/mqueue.
Unmounting /dev/hugepages.
Unmounting /sys/kernel/debug.
[176094.363845] EXT4-fs (dm-1): re-mounted. Opts: (null)
[176094.548631] EXT4-fs (dm-1): re-mounted. Opts: (null)
[176094.554450] EXT4-fs (dm-1): re-mounted. Opts: (null)
All filesystems unmounted.
Deactivating swaps.
All swaps deactivated.
Detaching loop devices.
All loop devices detached.
Detaching DM devices.
Detaching DM 253:2.
Detaching DM 253:0.
Not all DM devices detached, 1 left.
Detaching DM devices.
Not all DM devices detached, 1 left.
Cannot finalize remaining file systems and devices, giving up.
Storage is finalized.
Successfully changed into root pivot.
Returning to initrd...
[176094.675812] dracut Warning: Killing all remaining processes
============

L1 Kernel: 3.10.0-0.rc0.git26.1.fc20.x86_64
L2 Kernel: 3.10.0-0.rc0.git26.1.fc20.x86_64

I noted how I reproduced this in my previous emails to this thread. Am I
doing anything plainly incorrect?

Thanks in advance.

/kashyap
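
P.S. In case it helps to rule out the obvious: below is a minimal
sanity-check sketch (not the reproduction recipe from my earlier mails)
for confirming that VMX is exposed and that nested support is switched
on, run once on L0 and once inside L1. It only assumes the standard
/proc/cpuinfo flags line and the kvm_intel "nested" module parameter on
an Intel host.

    # Hedged sketch: assumes an Intel host; the paths below are the usual
    # procfs/sysfs locations, nothing specific to this thread.
    from pathlib import Path

    def cpu_has_vmx():
        """True if /proc/cpuinfo advertises the 'vmx' flag (inside L1 this
        means the L0 side actually passed VMX through to the guest CPU)."""
        for line in Path("/proc/cpuinfo").read_text().splitlines():
            if line.startswith("flags") and "vmx" in line.split():
                return True
        return False

    def kvm_intel_nested():
        """Value of kvm_intel's 'nested' parameter ('Y' or '1' when enabled),
        or None if kvm_intel is not loaded."""
        p = Path("/sys/module/kvm_intel/parameters/nested")
        return p.read_text().strip() if p.exists() else None

    if __name__ == "__main__":
        print("vmx flag in /proc/cpuinfo :", cpu_has_vmx())
        print("kvm_intel nested parameter:", kvm_intel_nested())

On L0 both checks should come back positive; inside L1 the vmx flag must
show up as well, otherwise KVM in L1 cannot run L2 at all, regardless of
the traces above.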