On Wed, Jun 15, 2022, Chao Gao wrote:
> >@@ -5980,6 +5987,8 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_event,
> > int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
> > 			    struct kvm_enable_cap *cap)
> > {
> >+	struct kvm_vcpu *vcpu;
> >+	unsigned long i;
> > 	int r;
> >
> > 	if (cap->flags)
> >@@ -6036,14 +6045,17 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
> > 			break;
> >
> > 		mutex_lock(&kvm->lock);
> >-		if (kvm->created_vcpus)
> >-			goto disable_exits_unlock;
> >+		if (kvm->created_vcpus) {
> >+			kvm_for_each_vcpu(i, vcpu, kvm) {
> >+				kvm_ioctl_disable_exits(vcpu->arch, cap->args[0]);
> >+				static_call(kvm_x86_update_disabled_exits)(vcpu);
>
> IMO, this won't work on Intel platforms.

It's not safe on AMD either: at best the behavior is non-deterministic if the
vCPU is already running in the guest, and at worst it could cause explosions,
e.g. if hardware doesn't like software modifying in-use VMCB state.

> Because, to manipulate a vCPU's VMCS, vcpu_load() should be invoked in
> advance to load the VMCS. Alternatively, you can add a request KVM_REQ_XXX
> and defer updating the VMCS to the next vCPU entry.

Definitely use a request; doing vcpu_load() from a KVM-scoped ioctl() would be
a mess, as KVM would need to acquire the per-vCPU lock for every vCPU.
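
E.g. something like the below (completely untested sketch; the request name
KVM_REQ_UPDATE_DISABLED_EXITS and its bit number are purely illustrative,
whatever free KVM_ARCH_REQ() bit is available would need to be used):

	/* Illustrative name and bit only, pick a free arch request bit. */
	#define KVM_REQ_UPDATE_DISABLED_EXITS	KVM_ARCH_REQ(33)

	mutex_lock(&kvm->lock);
	if (kvm->created_vcpus) {
		kvm_for_each_vcpu(i, vcpu, kvm) {
			kvm_ioctl_disable_exits(vcpu->arch, cap->args[0]);

			/* Defer the VMCS/VMCB update until the vCPU is loaded. */
			kvm_make_request(KVM_REQ_UPDATE_DISABLED_EXITS, vcpu);

			/* Force a running vCPU out of the guest to process it. */
			kvm_vcpu_kick(vcpu);
		}
	}
	mutex_unlock(&kvm->lock);

and then in vcpu_enter_guest(), where the vCPU is loaded and guaranteed to be
outside of the guest:

	if (kvm_check_request(KVM_REQ_UPDATE_DISABLED_EXITS, vcpu))
		static_call(kvm_x86_update_disabled_exits)(vcpu);

That way the VMCS/VMCB is only ever touched from the vCPU's own context, and
the kick ensures the update takes effect before the next guest entry.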