Re: [PATCH] KVM: arm64: Mark the VM as dead for failed initializations

Hi Raghu,

Thanks for posting this fix.

On Fri, Oct 25, 2024 at 10:12:20PM +0000, Raghavendra Rao Ananta wrote:
> Syzbot hit the following WARN_ON() in kvm_timer_update_irq():
> 
> WARNING: CPU: 0 PID: 3281 at arch/arm64/kvm/arch_timer.c:459
> kvm_timer_update_irq+0x21c/0x394
> Call trace:
>   kvm_timer_update_irq+0x21c/0x394 arch/arm64/kvm/arch_timer.c:459
>   kvm_timer_vcpu_reset+0x158/0x684 arch/arm64/kvm/arch_timer.c:968
>   kvm_reset_vcpu+0x3b4/0x560 arch/arm64/kvm/reset.c:264
>   kvm_vcpu_set_target arch/arm64/kvm/arm.c:1553 [inline]
>   kvm_arch_vcpu_ioctl_vcpu_init arch/arm64/kvm/arm.c:1573 [inline]
>   kvm_arch_vcpu_ioctl+0x112c/0x1b3c arch/arm64/kvm/arm.c:1695
>   kvm_vcpu_ioctl+0x4ec/0xf74 virt/kvm/kvm_main.c:4658
>   vfs_ioctl fs/ioctl.c:51 [inline]
>   __do_sys_ioctl fs/ioctl.c:907 [inline]
>   __se_sys_ioctl fs/ioctl.c:893 [inline]
>   __arm64_sys_ioctl+0x108/0x184 fs/ioctl.c:893
>   __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
>   invoke_syscall+0x78/0x1b8 arch/arm64/kernel/syscall.c:49
>   el0_svc_common+0xe8/0x1b0 arch/arm64/kernel/syscall.c:132
>   do_el0_svc+0x40/0x50 arch/arm64/kernel/syscall.c:151
>   el0_svc+0x54/0x14c arch/arm64/kernel/entry-common.c:712
>   el0t_64_sync_handler+0x84/0xfc arch/arm64/kernel/entry-common.c:730
>   el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:598
> 
> The sequence that led to the report is when KVM_ARM_VCPU_INIT ioctl is
> invoked after a failed first KVM_RUN. In a general sense though, since
> kvm_arch_vcpu_run_pid_change() doesn't tear down any of the past
> initiatializations, it's possible that the VM's state could be left

typo: initializations

> half-baked. Any upcoming ioctls could behave erroneously because of
> this.

You may want to highlight a bit more strongly that, despite the name,
we do a lot of late *VM* state initialization in kvm_arch_vcpu_run_pid_change().

When that goes sideways, we're left with few choices besides bugging the
VM or gracefully tearing down the state, potentially with concurrent
users of it.
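
To make that concrete, here's my rough read of what the hunk below ends
up touching (worth double-checking against the actual code):

	kvm_timer_enable(vcpu)				/* per-vCPU */
	kvm_arm_pmu_v3_enable(vcpu)			/* per-vCPU */
	pkvm_create_hyp_vm(kvm)				/* VM-wide */
	irqchip/vgic bits under kvm->arch.config_lock	/* VM-wide */

so a failure in any of the VM-wide pieces leaves shared state
half-initialized.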

> Since these late vCPU initializations are past the point of attributing
> the failures to any ioctl, instead of tearing down each of the previous
> setups, simply mark the VM as dead, giving an opportunity for the
> userspace to close and try again.
> 
> Cc: <stable@xxxxxxxxxxxxxxx>
> Reported-by: syzbot <syzkaller@xxxxxxxxxxxxxxxx>
> Suggested-by: Oliver Upton <oliver.upton@xxxxxxxxx>

I definitely recommended this to you, so blame *me* for imposing some
toil on you with the following.

> @@ -836,16 +836,16 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
>  
>  	ret = kvm_timer_enable(vcpu);
>  	if (ret)
> -		return ret;
> +		goto out_err;
>  
>  	ret = kvm_arm_pmu_v3_enable(vcpu);
>  	if (ret)
> -		return ret;
> +		goto out_err;
>  
>  	if (is_protected_kvm_enabled()) {
>  		ret = pkvm_create_hyp_vm(kvm);
>  		if (ret)
> -			return ret;
> +			goto out_err;
>  	}
>  
>  	if (!irqchip_in_kernel(kvm)) {
> @@ -869,6 +869,10 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
>  	mutex_unlock(&kvm->arch.config_lock);
>  
>  	return ret;
> +
> +out_err:
> +	kvm_vm_dead(kvm);
> +	return ret;
>  }
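
For anyone following along, the end result for userspace seems sane to
me: if I remember include/linux/kvm_host.h correctly (so take this from
memory with a grain of salt), kvm_vm_dead() is just

	static inline void kvm_vm_dead(struct kvm *kvm)
	{
		kvm->vm_dead = true;
		kvm_make_all_cpus_request(kvm, KVM_REQ_VM_DEAD);
	}

and the ioctl entry points bail out with -EIO once vm_dead is set, so
all userspace can usefully do is close() the fds and start over, which
matches the changelog.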

After rereading, I think we could benefit from a more distinct separation
of late VM vs. vCPU state initialization.

Bugging the VM is a big hammer; we should probably only resort to that
when the VM state is badly screwed up.

Otherwise, for screwed-up vCPU state we could uninitialize the vCPU and
let userspace try again. An example of this is how we deal with VMs that
run 32-bit userspace when KVM tries to hide the feature.
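
Purely as a sketch of what I mean (the kvm_finalize_*/kvm_uninit_*
helpers below are made up, nothing like them exists today):

	int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
	{
		struct kvm *kvm = vcpu->kvm;
		int ret;

		/*
		 * Per-vCPU setup: on failure, unwind just the vCPU so
		 * userspace can retry KVM_ARM_VCPU_INIT.
		 */
		ret = kvm_finalize_vcpu_state(vcpu);	/* hypothetical */
		if (ret) {
			kvm_uninit_vcpu_state(vcpu);	/* hypothetical */
			return ret;
		}

		/*
		 * VM-wide setup: a failure here leaves shared state
		 * half-baked, so the big hammer is warranted.
		 */
		ret = kvm_finalize_vm_state(kvm);	/* hypothetical */
		if (ret) {
			kvm_vm_dead(kvm);
			return ret;
		}

		return 0;
	}

i.e. keep the vCPU-scoped pieces unwindable and reserve kvm_vm_dead()
for the VM-scoped half.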

WDYT?

-- 
Thanks,
Oliver



