On Mon, Jun 03, 2024, Babu Moger wrote:
> System throws this following UBSAN: invalid-load error when the very first
> VM is powered up on a freshly booted host machine. Happens only with 2P or
> 4P (multiple sockets) systems.

...

> However, VM boots up fine without any issues and operational.
>
> The error is due to invalid assignment in kvm invalidate range end path.
> There is no arch specific handler for this case and handler is assigned
> to kvm_null_fn(). This is an empty function and returns void. Return value
> of this function is assigned to boolean variable. UBSAN complains about
> this incompatible assignment when kernel is compiled with CONFIG_UBSAN.
>
> Fix the issue by adding a check for the null handler.
>
> Signed-off-by: Babu Moger <babu.moger@xxxxxxx>
> ---
> Seems straight forward fix to me. Point me if you think otherwise. New
> to this area of the code. First of all not clear to me why handler need
> to be called when memory slot is not found in the hva range.
> ---
>  virt/kvm/kvm_main.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 14841acb8b95..ee8be1835214 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -653,7 +653,8 @@ static __always_inline kvm_mn_ret_t __kvm_handle_hva_range(struct kvm *kvm,
>  				if (IS_KVM_NULL_FN(range->handler))
>  					break;
>  			}
> -			r.ret |= range->handler(kvm, &gfn_range);
> +			if (!IS_KVM_NULL_FN(range->handler))
> +				r.ret |= range->handler(kvm, &gfn_range);

Hrm, this should be unreachable, the IS_KVM_NULL_FN() just above is supposed
to bail after locking.  Ah, the "break" will only break out of the memslot
loop, it won't break out of the address space loop.  Stupid SMM.

I think this is what we want.
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index b312d0cbe60b..70f5a39f8302 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -651,7 +651,7 @@ static __always_inline kvm_mn_ret_t __kvm_handle_hva_range(struct kvm *kvm,
 					range->on_lock(kvm);
 
 				if (IS_KVM_NULL_FN(range->handler))
-					break;
+					goto mmu_unlock;
 			}
 			r.ret |= range->handler(kvm, &gfn_range);
 		}
@@ -660,6 +660,7 @@ static __always_inline kvm_mn_ret_t __kvm_handle_hva_range(struct kvm *kvm,
 
 	if (range->flush_on_ret && r.ret)
 		kvm_flush_remote_tlbs(kvm);
 
+mmu_unlock:
 	if (r.found_memslot)
 		KVM_MMU_UNLOCK(kvm);