On Thu, Aug 29, 2013 at 11:06:47AM +0800, chenfan wrote:
> On Wed, 2013-08-28 at 11:21 +0300, Gleb Natapov wrote:
> > On Wed, Aug 28, 2013 at 04:08:10PM +0800, Chen Fan wrote:
> > > If we want to remove a vcpu from a vm, we should not only put kvm;
> > > we also need to decrease online vcpus and free the struct kvm_vcpu
> > > when closing a vcpu file descriptor.
> > >
> > It is much more complicated than what you do here. kvm->vcpus[] is
> > accessed without locking, and kvm->online_vcpus is assumed to be
> > greater than the max occupied index in kvm->vcpus[]. Attempts were
> > made before to make it possible to destroy individual vcpus
> > separately from destroying the VM, but they have been unsuccessful
> > thus far.
> >
> Oh, the motivation of this patch is to let QEMU implement cpu
> hot-remove: I want to remove the vcpu from KVM after QEMU closes the
> kvm_fd, and I need some advice on this feature. If this patch is not
> comprehensive enough, I would like to know the reasons in more detail.
> Could you tell me?
>
I told you above.

> Thanks,
> Chen
>
> > > Signed-off-by: Chen Fan <chen.fan.fnst@xxxxxxxxxxxxxx>
> > > ---
> > >  arch/x86/kvm/x86.c       | 14 ++++++++++++++
> > >  include/linux/kvm_host.h |  1 +
> > >  virt/kvm/kvm_main.c      |  4 +++-
> > >  3 files changed, 18 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > > index d21bce5..3370de1 100644
> > > --- a/arch/x86/kvm/x86.c
> > > +++ b/arch/x86/kvm/x86.c
> > > @@ -6904,6 +6904,20 @@ static void kvm_free_vcpus(struct kvm *kvm)
> > >  	mutex_unlock(&kvm->lock);
> > >  }
> > >
> > > +void kvm_arch_vcpu_release(struct kvm_vcpu *vcpu)
> > > +{
> > > +	struct kvm *kvm = vcpu->kvm;
> > > +
> > > +	mutex_lock(&kvm->lock);
> > > +	atomic_dec(&kvm->online_vcpus);
> > > +	kvm->vcpus[atomic_read(&kvm->online_vcpus)] = NULL;
> > > +	mutex_unlock(&kvm->lock);
> > > +
> > > +	kvm_clear_async_pf_completion_queue(vcpu);
> > > +	kvm_unload_vcpu_mmu(vcpu);
> > > +	kvm_arch_vcpu_free(vcpu);
> > > +}
> > > +
> > >  void kvm_arch_sync_events(struct kvm *kvm)
> > >  {
> > >  	kvm_free_all_assigned_devices(kvm);
> > > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > > index a63d83e..e0811f97 100644
> > > --- a/include/linux/kvm_host.h
> > > +++ b/include/linux/kvm_host.h
> > > @@ -631,6 +631,7 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id);
> > >  int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu);
> > >  int kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu);
> > >  void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu);
> > > +void kvm_arch_vcpu_release(struct kvm_vcpu *vcpu);
> > >
> > >  int kvm_arch_hardware_enable(void *garbage);
> > >  void kvm_arch_hardware_disable(void *garbage);
> > > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > > index 1580dd4..442014b 100644
> > > --- a/virt/kvm/kvm_main.c
> > > +++ b/virt/kvm/kvm_main.c
> > > @@ -1873,8 +1873,10 @@ static int kvm_vcpu_mmap(struct file *file, struct vm_area_struct *vma)
> > >  static int kvm_vcpu_release(struct inode *inode, struct file *filp)
> > >  {
> > >  	struct kvm_vcpu *vcpu = filp->private_data;
> > > +	struct kvm *kvm = vcpu->kvm;
> > >
> > > -	kvm_put_kvm(vcpu->kvm);
> > > +	kvm_arch_vcpu_release(vcpu);
> > > +	kvm_put_kvm(kvm);
> > >  	return 0;
> > >  }
> > >
> > > --
> > > 1.8.1.4
> > >
> >
> > --
> > 	Gleb.
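
For context, the lockless readers referred to above are, roughly, the
kvm_get_vcpu()/kvm_for_each_vcpu() helpers in include/linux/kvm_host.h;
the bodies below are a sketch from memory of the 3.x-era code, so treat
the exact wording as approximate rather than verbatim kernel source:

	/* Readers index kvm->vcpus[] without taking kvm->lock. */
	static inline struct kvm_vcpu *kvm_get_vcpu(struct kvm *kvm, int i)
	{
		smp_rmb();
		return kvm->vcpus[i];
	}

	/*
	 * Iteration stops at online_vcpus and expects every slot below it
	 * to stay populated.  Decrementing online_vcpus and clearing the
	 * last slot under kvm->lock, as the patch does, races with these
	 * unlocked readers, and the vcpu whose fd is being closed is not
	 * necessarily the one sitting in the last slot.
	 */
	#define kvm_for_each_vcpu(idx, vcpup, kvm) \
		for (idx = 0; \
		     idx < atomic_read(&kvm->online_vcpus) && \
		     (vcpup = kvm_get_vcpu(kvm, idx)) != NULL; \
		     idx++)

A reader that has already fetched a vcpu pointer from the array can also
keep using it after the proposed kvm_arch_vcpu_free(), so dropping the
slot under kvm->lock alone is not enough to make the free safe.
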
--
	Gleb.