Re: [RFC PATCH 3/4] KVM: x86/xen: Disallow gpc locks reinitialization

On Fri, Sep 16, 2022, Michal Luczaj wrote:
> There are race conditions possible due to kvm_gfn_to_pfn_cache_init()'s
> ability to _re_initialize gfn_to_pfn_cache.lock.
> 
> For example: a race between ioctl(KVM_XEN_HVM_EVTCHN_SEND) and
> kvm_gfn_to_pfn_cache_init() leads to a corrupted shinfo gpc lock.
> 
>                 (thread 1)                |           (thread 2)
>                                           |
>  kvm_xen_set_evtchn_fast                  |
>   read_lock_irqsave(&gpc->lock, ...)      |
>                                           | kvm_gfn_to_pfn_cache_init
>                                           |  rwlock_init(&gpc->lock)
>   read_unlock_irqrestore(&gpc->lock, ...) |
> 

Please explicitly include a sample call stack for reaching kvm_gfn_to_pfn_cache_init().
Without that, it's difficult to tell whether this is a bug in the gfn_to_pfn_cache
code or a bug in the caller.

> Introduce bool locks_initialized.
> 
> Signed-off-by: Michal Luczaj <mhal@xxxxxxx>
> ---
>  include/linux/kvm_types.h | 1 +
>  virt/kvm/pfncache.c       | 7 +++++--
>  2 files changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
> index 3ca3db020e0e..7e7b7667cd9e 100644
> --- a/include/linux/kvm_types.h
> +++ b/include/linux/kvm_types.h
> @@ -74,6 +74,7 @@ struct gfn_to_pfn_cache {
>  	void *khva;
>  	kvm_pfn_t pfn;
>  	enum pfn_cache_usage usage;
> +	bool locks_initialized;
>  	bool active;
>  	bool valid;
>  };
> diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
> index 68ff41d39545..564607e10586 100644
> --- a/virt/kvm/pfncache.c
> +++ b/virt/kvm/pfncache.c
> @@ -354,8 +354,11 @@ int kvm_gfn_to_pfn_cache_init(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
>  	WARN_ON_ONCE(!usage || (usage & KVM_GUEST_AND_HOST_USE_PFN) != usage);
>  
>  	if (!gpc->active) {
> -		rwlock_init(&gpc->lock);
> -		mutex_init(&gpc->refresh_lock);
> +		if (!gpc->locks_initialized) {

Rather than add another flag, move the lock initialization to another helper and
call the new helper from e.g. kvm_xen_init_vm() and kvm_xen_init_vcpu().  There
is zero reason to initialize locks on-demand.  That way, patches 1 and 2 aren't
necessary because gpc->lock is always valid.
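To illustrate, a one-time lock-init helper could be as small as this (completely
untested, name and placement hypothetical; the fuller kvm_gpc_init() sketch
further down subsumes it):

	/* Initialize the gpc locks exactly once, at VM/vCPU setup, so that
	 * gpc->lock is valid for the lifetime of the VM and can never be
	 * re-initialized underneath a reader. */
	void kvm_gpc_init_locks(struct gfn_to_pfn_cache *gpc)
	{
		rwlock_init(&gpc->lock);
		mutex_init(&gpc->refresh_lock);
	}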

And then at the same time, rename "cache_init" and "cache_destroy" to
activate+deactivate to avoid implying that the cache really is destroyed/freed.

Adding a true init() API will also allow for future cleanups.  @kvm, @vcpu, @len,
and @usage all should be immutable in the sense that they are properties of the
cache, i.e. can be moved into init().  The nice side effect of moving most of that
stuff into init() is that it makes it very obvious from the activate() call sites
that the gpa is the only mutable information.

I.e. as additional patches, do:

  1. Formalize "gpc" as the acronym and use it in function names to reduce line
     lengths.  Maybe keep the long name for gfn_to_pfn_cache_invalidate_start()
     though?  Since that is a very different API.

  2. Snapshot @usage during kvm_gpc_init().

  3. Add a KVM backpointer in "struct gfn_to_pfn_cache" and snapshot it during
     initialization.  The extra memory cost is negligible in the grand scheme,
     and not having to constantly provide @kvm makes the call sites prettier, and
     it avoids weirdness where @kvm is mandatory but @vcpu is not.

  4. Add a backpointer for @vcpu too, since again the memory overhead is minor,
     and taking @vcpu in "activate" implies that it's legal to share a cache
     between multiple vCPUs, which is not the case.  And opportunistically add a
     WARN to assert that @vcpu is non-NULL if KVM_GUEST_USES_PFN. 

  5. Snapshot @len during kvm_gpc_init().

so that we end up with something like (completely untested):

	bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, gpa_t gpa)
	int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa)
	void kvm_gpc_unmap(struct gfn_to_pfn_cache *gpc)

	void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm,
			  struct kvm_vcpu *vcpu, enum pfn_cache_usage usage,
			  unsigned long len)
	{
		WARN_ON_ONCE(!usage || (usage & KVM_GUEST_AND_HOST_USE_PFN) != usage);
		WARN_ON_ONCE((usage & KVM_GUEST_USES_PFN) && !vcpu);

		rwlock_init(&gpc->lock);
		mutex_init(&gpc->refresh_lock);

		gpc->kvm = kvm;
		gpc->vcpu = vcpu;
		gpc->usage = usage;
		gpc->len = len;
	}
	EXPORT_SYMBOL_GPL(kvm_gpc_init);

	int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa)
	{
		if (!gpc->active) {
			gpc->khva = NULL;
			gpc->pfn = KVM_PFN_ERR_FAULT;
			gpc->uhva = KVM_HVA_ERR_BAD;
			gpc->valid = false;
			gpc->active = true;

			spin_lock(&gpc->kvm->gpc_lock);
			list_add(&gpc->list, &gpc->kvm->gpc_list);
			spin_unlock(&gpc->kvm->gpc_lock);
		}
		return kvm_gpc_refresh(gpc, gpa);
	}
	EXPORT_SYMBOL_GPL(kvm_gpc_activate);

	void kvm_gpc_deactivate(struct gfn_to_pfn_cache *gpc)
	{
		if (gpc->active) {
			spin_lock(&gpc->kvm->gpc_lock);
			list_del(&gpc->list);
			spin_unlock(&gpc->kvm->gpc_lock);

			kvm_gpc_unmap(gpc);
			gpc->active = false;
		}
	}
	EXPORT_SYMBOL_GPL(kvm_gpc_deactivate);
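
And the Xen call sites would then look something like this (again completely
untested; the exact struct fields and attribute handler are from memory and may
differ):

	void kvm_xen_init_vm(struct kvm *kvm)
	{
		/* Locks and immutable cache properties set up exactly once. */
		kvm_gpc_init(&kvm->arch.xen.shinfo_cache, kvm, NULL,
			     KVM_HOST_USES_PFN, PAGE_SIZE);
	}

	/* KVM_XEN_ATTR_TYPE_SHARED_INFO handler: only the gpa is mutable. */
	kvm_gpc_activate(&kvm->arch.xen.shinfo_cache, gpa);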

Let me know if you want to take on the above cleanups; if not, I'll add them to
my todo list.

Thanks!


