Re: [PATCH 3/3] kvm-s390: streamline memslot handling - rebased

On Tue, Jun 02, 2009 at 04:26:11PM +0200, ehrhardt@xxxxxxxxxxxxxxxxxx wrote:
> From: Christian Ehrhardt <ehrhardt@xxxxxxxxxxxxxxxxxx>
> 
> As requested, this is a rebased patch on top of the already applied v3
> of the patch series.
> 
> *updates to applied version*
> - ensure the wait_on_bit waiter is notified
> - ensure all vcpu requests are dropped when freeing a vcpu
> - kick out only scheduled vcpus (kicking is superfluous for vcpus that are
>   not running, and the wait might hang forever on them)
> - kvm_arch_set_memory_region waits until the bit is consumed by the vcpu
> 
> This patch relocates the variables kvm-s390 uses to track guest mem addr/size.
> As discussed, dropping the variables at struct kvm_arch level allows us to use
> the common vcpu->requests based mechanism to reload the guest memory setup
> when it changes, e.g. via set_memory_region.
> The kick mechanism introduced in this series is used to ensure that running
> vcpus leave guest context to pick up the update.
> 
> 
> Signed-off-by: Christian Ehrhardt <ehrhardt@xxxxxxxxxxxxxxxxxx>
> ---
> 
> [diffstat]
>  arch/s390/kvm/kvm-s390.c |   27 ++++++++++++++++++++-------
>  arch/s390/kvm/kvm-s390.h |    7 +++++++
>  virt/kvm/kvm_main.c      |    4 ++++
>  3 files changed, 31 insertions(+), 7 deletions(-)
> 
> Index: kvm/arch/s390/kvm/kvm-s390.c
> ===================================================================
> --- kvm.orig/arch/s390/kvm/kvm-s390.c
> +++ kvm/arch/s390/kvm/kvm-s390.c
> @@ -674,6 +674,12 @@ long kvm_arch_vcpu_ioctl(struct file *fi
>  	return -EINVAL;
>  }
>  
> +static int wait_bit_schedule(void *word)
> +{
> +	schedule();
> +	return 0;
> +}
> +
>  /* Section: memory related */
>  int kvm_arch_set_memory_region(struct kvm *kvm,
>  				struct kvm_userspace_memory_region *mem,
> @@ -681,6 +687,7 @@ int kvm_arch_set_memory_region(struct kv
>  				int user_alloc)
>  {
>  	int i;
> +	struct kvm_vcpu *vcpu;
>  
>  	/* A few sanity checks. We can have exactly one memory slot which has
>  	   to start at guest virtual zero and which has to be located at a
> @@ -706,13 +713,19 @@ int kvm_arch_set_memory_region(struct kv
>  
>  	/* request update of sie control block for all available vcpus */
>  	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
> -		if (kvm->vcpus[i]) {
> -			if (test_and_set_bit(KVM_REQ_MMU_RELOAD,
> -						&kvm->vcpus[i]->requests))
> -				continue;
> -			kvm_s390_inject_sigp_stop(kvm->vcpus[i],
> -						  ACTION_VCPUREQUEST_ON_STOP);
> -		}
> +		vcpu = kvm->vcpus[i];
> +		if (!vcpu)
> +			continue;
> +
> +		if (test_and_set_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests))
> +			continue;
> +
> +		if (vcpu->cpu == -1)
> +			continue;

What happens if the check for cpu == -1 races with kvm_arch_vcpu_put?
This context would then wait until the vcpu task is scheduled back in
and clears the bit. Is that OK?
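
For illustration, one possible interleaving (a hypothetical timeline,
using the names from the patch):

	set_memory_region context	vcpu task
	-------------------------	---------
	test_and_set_bit() returns 0
	sees vcpu->cpu != -1
					kvm_arch_vcpu_put()
					  vcpu->cpu = -1; /* task sleeps */
	kvm_s390_inject_sigp_stop()
	wait_on_bit()	/* blocks until the vcpu task is scheduled back
			   in and clears the bit in handle_requests */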

> +
> +		kvm_s390_inject_sigp_stop(vcpu, ACTION_VCPUREQUEST_ON_STOP);
> +		wait_on_bit(&vcpu->requests, KVM_REQ_MMU_RELOAD,
> +			    wait_bit_schedule, TASK_UNINTERRUPTIBLE);
>  	}

 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 {
+       vcpu->cpu = -1;
        save_fp_regs(&vcpu->arch.guest_fpregs);
        save_access_regs(vcpu->arch.guest_acrs);
        restore_fp_regs(&vcpu->arch.host_fpregs);

>  
>  	return 0;
> Index: kvm/arch/s390/kvm/kvm-s390.h
> ===================================================================
> --- kvm.orig/arch/s390/kvm/kvm-s390.h
> +++ kvm/arch/s390/kvm/kvm-s390.h
> @@ -92,6 +92,13 @@ static inline unsigned long kvm_s390_han
>  	if (!vcpu->requests)
>  		return 0;
>  
> +	/* requests that can be handled at all levels */
> +	if (test_and_clear_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests)) {
> +		smp_mb__after_clear_bit();

Do we really need that smp_mb__after_clear_bit()? AFAIK
test_and_clear_bit() already implies a full barrier.
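
For comparison, a minimal sketch of the two patterns (assuming the
usual bitops semantics: value-returning atomics such as
test_and_clear_bit() imply a full barrier on both sides, while a
plain clear_bit() implies none):

	/* value-returning RMW: no extra barrier needed before the wakeup */
	if (test_and_clear_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests))
		wake_up_bit(&vcpu->requests, KVM_REQ_MMU_RELOAD);

	/* plain clear_bit(): a barrier is needed before waking the waiter */
	clear_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests);
	smp_mb__after_clear_bit();
	wake_up_bit(&vcpu->requests, KVM_REQ_MMU_RELOAD);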

> +		wake_up_bit(&vcpu->requests, KVM_REQ_MMU_RELOAD);
> +		kvm_s390_vcpu_set_mem(vcpu);
> +	}
> +
>  	return vcpu->requests;
>  }
>  
> Index: kvm/virt/kvm/kvm_main.c
> ===================================================================
> --- kvm.orig/virt/kvm/kvm_main.c
> +++ kvm/virt/kvm/kvm_main.c
> @@ -1682,6 +1682,10 @@ static int kvm_vcpu_release(struct inode
>  {
>  	struct kvm_vcpu *vcpu = filp->private_data;
>  
> +	clear_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests);
> +	smp_mb__after_clear_bit();
> +	wake_up_bit(&vcpu->requests, KVM_REQ_MMU_RELOAD);
> +

And shouldn't this be generic? Say, if other architectures want to make
use of a similar wait infrastructure. Talk is cheap.

Anyway, yeah, the set request / wait mechanism you implement here is
quite similar to the idea mentioned earlier that could be used for x86.

Just get rid of the explicit KVM_REQ_MMU_RELOAD knowledge in
arch-independent code, please (if you want to see this merged).

Later it can all be lifted out into arch-independent code.
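
One way to do that (kvm_arch_vcpu_release() is a made-up name, just to
sketch the idea) would be to hide the request cleanup behind an arch
hook:

	/* virt/kvm/kvm_main.c: stays free of arch-specific requests */
	static int kvm_vcpu_release(struct inode *inode, struct file *filp)
	{
		struct kvm_vcpu *vcpu = filp->private_data;

		kvm_arch_vcpu_release(vcpu);	/* hypothetical hook */
		kvm_put_kvm(vcpu->kvm);
		return 0;
	}

	/* arch/s390/kvm/kvm-s390.c: drop the request, wake any waiter */
	void kvm_arch_vcpu_release(struct kvm_vcpu *vcpu)
	{
		clear_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests);
		smp_mb__after_clear_bit();
		wake_up_bit(&vcpu->requests, KVM_REQ_MMU_RELOAD);
	}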

>  	kvm_put_kvm(vcpu->kvm);
>  	return 0;
>  }