On 2016-02-23 10:26, Mike Krinkin wrote:
> Hi,
>
> while playing with a toy OS project I've hit two suspicious UBSAN warnings.
> The first one:
>
> [  168.067027] ================================================================================
> [  168.067039] UBSAN: Undefined behaviour in arch/x86/kvm/mmu.c:2011:17
> [  168.067044] index 3 is out of range for type 'kvm_mmu_page *[3]'
> [  168.067051] CPU: 1 PID: 2950 Comm: qemu-system-x86 Tainted: G O L 4.5.0-rc5-next-20160222 #7
> [  168.067055] Hardware name: LENOVO 23205NG/23205NG, BIOS G2ET95WW (2.55 ) 07/09/2013
> [  168.067059]  0000000000000000 ffff8801cfcaf438 ffffffff81c9f780 0000000041b58ab3
> [  168.067068]  ffffffff82eb2cc1 ffffffff81c9f6b4 ffff8801cfcaf460 ffff8801cfcaf410
> [  168.067076]  0000000000000003 0000000000000001 0000000000000000 ffffffffa1981680
> [  168.067083] Call Trace:
> [  168.067093]  [<ffffffff81c9f780>] dump_stack+0xcc/0x12c
> [  168.067100]  [<ffffffff81c9f6b4>] ? _atomic_dec_and_lock+0xc4/0xc4
> [  168.067108]  [<ffffffff81da9e81>] ubsan_epilogue+0xd/0x8a
> [  168.067115]  [<ffffffff81daafa2>] __ubsan_handle_out_of_bounds+0x15c/0x1a3
> [  168.067121]  [<ffffffff81daae46>] ? __ubsan_handle_shift_out_of_bounds+0x2bd/0x2bd
> [  168.067129]  [<ffffffff812e848b>] ? debug_lockdep_rcu_enabled+0x7b/0x90
> [  168.067135]  [<ffffffff812a0285>] ? __lock_acquire+0x1f35/0x7260
> [  168.067196]  [<ffffffffa1822c27>] mmu_zap_unsync_children+0x567/0x590 [kvm]
> [  168.067249]  [<ffffffffa18226c0>] ? mmu_shrink_scan+0x7d0/0x7d0 [kvm]
> [  168.067255]  [<ffffffff8129e350>] ? debug_check_no_locks_freed+0x350/0x350
> [  168.067261]  [<ffffffff8129d4af>] ? mark_held_locks+0xff/0x280
> [  168.067267]  [<ffffffff812e848b>] ? debug_lockdep_rcu_enabled+0x7b/0x90
> [  168.067272]  [<ffffffff812a01d4>] ? __lock_acquire+0x1e84/0x7260
> [  168.067279]  [<ffffffff8129d4af>] ? mark_held_locks+0xff/0x280
> [  168.067330]  [<ffffffffa1820828>] kvm_mmu_prepare_zap_page+0x118/0x1390 [kvm]
> [  168.067337]  [<ffffffff812eb9cb>] ? __synchronize_srcu+0x27b/0x570
> [  168.067386]  [<ffffffffa1820710>] ? kvm_mmu_page_fault+0x620/0x620 [kvm]
> [  168.067434]  [<ffffffffa178f650>] ? kvm_vcpu_compat_ioctl+0x460/0x460 [kvm]
> [  168.067485]  [<ffffffffa1833152>] kvm_mmu_invalidate_zap_all_pages+0x302/0x650 [kvm]
> [  168.067535]  [<ffffffffa17f599e>] kvm_arch_flush_shadow_memslot+0xe/0x10 [kvm]
> [  168.067581]  [<ffffffffa179171f>] __kvm_set_memory_region+0x180f/0x27b0 [kvm]
> [  168.067587]  [<ffffffff8293cf9e>] ? mutex_lock_nested+0x50e/0x970
> [  168.067634]  [<ffffffffa178ff10>] ? kvm_kvzalloc+0x30/0x30 [kvm]
> [  168.067640]  [<ffffffff812e848b>] ? debug_lockdep_rcu_enabled+0x7b/0x90
> [  168.067647]  [<ffffffff8162accd>] ? __might_fault+0x12d/0x240
> [  168.067693]  [<ffffffffa17926ed>] kvm_set_memory_region+0x2d/0x50 [kvm]
> [  168.067738]  [<ffffffffa1793022>] kvm_vm_ioctl+0x912/0x1610 [kvm]
> [  168.067784]  [<ffffffffa1792710>] ? kvm_set_memory_region+0x50/0x50 [kvm]
> [  168.067790]  [<ffffffff8129ee99>] ? __lock_acquire+0xb49/0x7260
> [  168.067797]  [<ffffffff812e848b>] ? debug_lockdep_rcu_enabled+0x7b/0x90
> [  168.067803]  [<ffffffff81659828>] ? page_add_new_anon_rmap+0x188/0x2d0
> [  168.067810]  [<ffffffff8129e350>] ? debug_check_no_locks_freed+0x350/0x350
> [  168.067816]  [<ffffffff82946087>] ? _raw_spin_unlock+0x27/0x40
> [  168.067821]  [<ffffffff8163a943>] ? handle_mm_fault+0x1673/0x2e40
> [  168.067828]  [<ffffffff812e848b>] ? debug_lockdep_rcu_enabled+0x7b/0x90
> [  168.067834]  [<ffffffff81725a80>] do_vfs_ioctl+0x1b0/0x12b0
> [  168.067840]  [<ffffffff817258d0>] ? ioctl_preallocate+0x210/0x210
> [  168.067847]  [<ffffffff8174aef3>] ? __fget+0x273/0x4a0
> [  168.067852]  [<ffffffff8174acd0>] ? __fget+0x50/0x4a0
> [  168.067858]  [<ffffffff8174b1f6>] ? __fget_light+0x96/0x2b0
> [  168.067864]  [<ffffffff81726bf9>] SyS_ioctl+0x79/0x90
> [  168.067870]  [<ffffffff82946880>] entry_SYSCALL_64_fastpath+0x23/0xc1
> [  168.067875] ================================================================================
>
> I'm not familiar with the code (and so I'm not sending a patch), but, AFAICS,
> apart from the assignment in kvm_mmu_pages_init, which causes the warning
> above, the only other assignment to the parent field of struct mmu_page_path
> is in mmu_pages_next, and that one doesn't produce warnings, so I applied the
> following diff and the warning is gone:
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 95a955d..cc673f1 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -2008,7 +2008,7 @@ static void kvm_mmu_pages_init(struct kvm_mmu_page *parent,
>  				struct mmu_page_path *parents,
>  				struct kvm_mmu_pages *pvec)
>  {
> -	parents->parent[parent->role.level-1] = NULL;
> +	parents->parent[parent->role.level-2] = NULL;
>  	pvec->nr = 0;
>  }
>
> The second one is:
>
> [  168.791851] ================================================================================
> [  168.791862] UBSAN: Undefined behaviour in arch/x86/kvm/paging_tmpl.h:252:15
> [  168.791866] index 4 is out of range for type 'u64 [4]'
> [  168.791871] CPU: 0 PID: 2950 Comm: qemu-system-x86 Tainted: G O L 4.5.0-rc5-next-20160222 #7
> [  168.791873] Hardware name: LENOVO 23205NG/23205NG, BIOS G2ET95WW (2.55 ) 07/09/2013
> [  168.791876]  0000000000000000 ffff8801cfcaf208 ffffffff81c9f780 0000000041b58ab3
> [  168.791882]  ffffffff82eb2cc1 ffffffff81c9f6b4 ffff8801cfcaf230 ffff8801cfcaf1e0
> [  168.791886]  0000000000000004 0000000000000001 0000000000000000 ffffffffa1981600
> [  168.791891] Call Trace:
> [  168.791899]  [<ffffffff81c9f780>] dump_stack+0xcc/0x12c
> [  168.791904]  [<ffffffff81c9f6b4>] ? _atomic_dec_and_lock+0xc4/0xc4
> [  168.791910]  [<ffffffff81da9e81>] ubsan_epilogue+0xd/0x8a
> [  168.791914]  [<ffffffff81daafa2>] __ubsan_handle_out_of_bounds+0x15c/0x1a3
> [  168.791918]  [<ffffffff81daae46>] ? __ubsan_handle_shift_out_of_bounds+0x2bd/0x2bd
> [  168.791922]  [<ffffffff811287ef>] ? get_user_pages_fast+0x2bf/0x360
> [  168.791954]  [<ffffffffa1794050>] ? kvm_largepages_enabled+0x30/0x30 [kvm]
> [  168.791958]  [<ffffffff81128530>] ? __get_user_pages_fast+0x360/0x360
> [  168.791987]  [<ffffffffa181b818>] paging64_walk_addr_generic+0x1b28/0x2600 [kvm]
> [  168.792014]  [<ffffffffa1819cf0>] ? init_kvm_mmu+0x1100/0x1100 [kvm]
> [  168.792019]  [<ffffffff8129e350>] ? debug_check_no_locks_freed+0x350/0x350
> [  168.792044]  [<ffffffffa1819cf0>] ? init_kvm_mmu+0x1100/0x1100 [kvm]
> [  168.792076]  [<ffffffffa181c36d>] paging64_gva_to_gpa+0x7d/0x110 [kvm]
> [  168.792121]  [<ffffffffa181c2f0>] ? paging64_walk_addr_generic+0x2600/0x2600 [kvm]
> [  168.792130]  [<ffffffff812e848b>] ? debug_lockdep_rcu_enabled+0x7b/0x90
> [  168.792178]  [<ffffffffa17d9a4a>] emulator_read_write_onepage+0x27a/0x1150 [kvm]
> [  168.792208]  [<ffffffffa1794d44>] ? __kvm_read_guest_page+0x54/0x70 [kvm]
> [  168.792234]  [<ffffffffa17d97d0>] ? kvm_task_switch+0x160/0x160 [kvm]
> [  168.792238]  [<ffffffff812e848b>] ? debug_lockdep_rcu_enabled+0x7b/0x90
> [  168.792263]  [<ffffffffa17daa07>] emulator_read_write+0xe7/0x6d0 [kvm]
> [  168.792290]  [<ffffffffa183b620>] ? em_cr_write+0x230/0x230 [kvm]
> [  168.792314]  [<ffffffffa17db005>] emulator_write_emulated+0x15/0x20 [kvm]
> [  168.792340]  [<ffffffffa18465f8>] segmented_write+0xf8/0x130 [kvm]
> [  168.792367]  [<ffffffffa1846500>] ? em_lgdt+0x20/0x20 [kvm]
> [  168.792374]  [<ffffffffa14db512>] ? vmx_read_guest_seg_ar+0x42/0x1e0 [kvm_intel]
> [  168.792400]  [<ffffffffa1846d82>] writeback+0x3f2/0x700 [kvm]
> [  168.792424]  [<ffffffffa1846990>] ? em_sidt+0xa0/0xa0 [kvm]
> [  168.792449]  [<ffffffffa185554d>] ? x86_decode_insn+0x1b3d/0x4f70 [kvm]
> [  168.792474]  [<ffffffffa1859032>] x86_emulate_insn+0x572/0x3010 [kvm]
> [  168.792499]  [<ffffffffa17e71dd>] x86_emulate_instruction+0x3bd/0x2110 [kvm]
> [  168.792524]  [<ffffffffa17e6e20>] ? reexecute_instruction.part.110+0x2e0/0x2e0 [kvm]
> [  168.792532]  [<ffffffffa14e9a81>] handle_ept_misconfig+0x61/0x460 [kvm_intel]
> [  168.792539]  [<ffffffffa14e9a20>] ? handle_pause+0x450/0x450 [kvm_intel]
> [  168.792546]  [<ffffffffa15130ea>] vmx_handle_exit+0xd6a/0x1ad0 [kvm_intel]
> [  168.792572]  [<ffffffffa17f6a6c>] ? kvm_arch_vcpu_ioctl_run+0xbdc/0x6090 [kvm]
> [  168.792597]  [<ffffffffa17f6bcd>] kvm_arch_vcpu_ioctl_run+0xd3d/0x6090 [kvm]
> [  168.792621]  [<ffffffffa17f6a6c>] ? kvm_arch_vcpu_ioctl_run+0xbdc/0x6090 [kvm]
> [  168.792627]  [<ffffffff8293b530>] ? __ww_mutex_lock_interruptible+0x1630/0x1630
> [  168.792651]  [<ffffffffa17f5e90>] ? kvm_arch_vcpu_runnable+0x4f0/0x4f0 [kvm]
> [  168.792656]  [<ffffffff811eeb30>] ? preempt_notifier_unregister+0x190/0x190
> [  168.792681]  [<ffffffffa17e0447>] ? kvm_arch_vcpu_load+0x127/0x650 [kvm]
> [  168.792704]  [<ffffffffa178e9a3>] kvm_vcpu_ioctl+0x553/0xda0 [kvm]
> [  168.792727]  [<ffffffffa178e450>] ? vcpu_put+0x40/0x40 [kvm]
> [  168.792732]  [<ffffffff8129e350>] ? debug_check_no_locks_freed+0x350/0x350
> [  168.792735]  [<ffffffff82946087>] ? _raw_spin_unlock+0x27/0x40
> [  168.792740]  [<ffffffff8163a943>] ? handle_mm_fault+0x1673/0x2e40
> [  168.792744]  [<ffffffff8129daa8>] ? trace_hardirqs_on_caller+0x478/0x6c0
> [  168.792747]  [<ffffffff8129dcfd>] ? trace_hardirqs_on+0xd/0x10
> [  168.792751]  [<ffffffff812e848b>] ? debug_lockdep_rcu_enabled+0x7b/0x90
> [  168.792756]  [<ffffffff81725a80>] do_vfs_ioctl+0x1b0/0x12b0
> [  168.792759]  [<ffffffff817258d0>] ? ioctl_preallocate+0x210/0x210
> [  168.792763]  [<ffffffff8174aef3>] ? __fget+0x273/0x4a0
> [  168.792766]  [<ffffffff8174acd0>] ? __fget+0x50/0x4a0
> [  168.792770]  [<ffffffff8174b1f6>] ? __fget_light+0x96/0x2b0
> [  168.792773]  [<ffffffff81726bf9>] SyS_ioctl+0x79/0x90
> [  168.792777]  [<ffffffff82946880>] entry_SYSCALL_64_fastpath+0x23/0xc1
> [  168.792780] ================================================================================
>
> The following fix solved the problem for me:
>
> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> index 6c9fed9..2ce4f05 100644
> --- a/arch/x86/kvm/paging_tmpl.h
> +++ b/arch/x86/kvm/paging_tmpl.h
> @@ -249,7 +249,7 @@ static int FNAME(update_accessed_dirty_bits)(struct kvm_vcpu *vcpu,
>  			return ret;
>
>  		kvm_vcpu_mark_page_dirty(vcpu, table_gfn);
> -		walker->ptes[level] = pte;
> +		walker->ptes[level - 1] = pte;
>  	}
>  	return 0;
>  }

CC'ing some KVM people with MMU background.

Jan

-- 
Siemens AG, Corporate Technology, CT RDA ITP SES-DE
Corporate Competence Center Embedded Linux