Re: [PATCH v4 5/5] drm/qxl: properly free qxl releases

On Tue, Jan 26, 2021 at 05:58:12PM +0100, Gerd Hoffmann wrote:
> Signed-off-by: Gerd Hoffmann <kraxel@xxxxxxxxxx>
> ---
>  drivers/gpu/drm/qxl/qxl_drv.h     |  1 +
>  drivers/gpu/drm/qxl/qxl_kms.c     | 22 ++++++++++++++++++++--
>  drivers/gpu/drm/qxl/qxl_release.c |  2 ++
>  3 files changed, 23 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
> index 01354b43c413..1c57b587b6a7 100644
> --- a/drivers/gpu/drm/qxl/qxl_drv.h
> +++ b/drivers/gpu/drm/qxl/qxl_drv.h
> @@ -214,6 +214,7 @@ struct qxl_device {
>  	spinlock_t	release_lock;
>  	struct idr	release_idr;
>  	uint32_t	release_seqno;
> +	atomic_t	release_count;
>  	spinlock_t release_idr_lock;
>  	struct mutex	async_io_mutex;
>  	unsigned int last_sent_io_cmd;
> diff --git a/drivers/gpu/drm/qxl/qxl_kms.c b/drivers/gpu/drm/qxl/qxl_kms.c
> index 4a60a52ab62e..f177f72bfc12 100644
> --- a/drivers/gpu/drm/qxl/qxl_kms.c
> +++ b/drivers/gpu/drm/qxl/qxl_kms.c
> @@ -25,6 +25,7 @@
>  
>  #include <linux/io-mapping.h>
>  #include <linux/pci.h>
> +#include <linux/delay.h>
>  
>  #include <drm/drm_drv.h>
>  #include <drm/drm_managed.h>
> @@ -286,8 +287,25 @@ int qxl_device_init(struct qxl_device *qdev,
>  
>  void qxl_device_fini(struct qxl_device *qdev)
>  {
> -	qxl_bo_unref(&qdev->current_release_bo[0]);
> -	qxl_bo_unref(&qdev->current_release_bo[1]);
> +	int cur_idx, try;
> +
> +	for (cur_idx = 0; cur_idx < 3; cur_idx++) {
> +		if (!qdev->current_release_bo[cur_idx])
> +			continue;
> +		qxl_bo_unpin(qdev->current_release_bo[cur_idx]);
> +		qxl_bo_unref(&qdev->current_release_bo[cur_idx]);
> +		qdev->current_release_bo_offset[cur_idx] = 0;
> +		qdev->current_release_bo[cur_idx] = NULL;
> +	}
> +
> +	/*
> +	 * Ask host to release resources (+fill release ring),
> +	 * then wait for the release actually happening.
> +	 */
> +	qxl_io_notify_oom(qdev);
> +	for (try = 0; try < 20 && atomic_read(&qdev->release_count) > 0; try++)
> +		msleep(20);

A bit icky. Why not use a wait queue or something like that instead of
hand-rolling this? Not for perf reasons, just so it's a bit clearer who
waits for whom and why.
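
Something along these lines is what I have in mind (untested sketch;
assumes a new wait_queue_head_t release_event in struct qxl_device,
initialized with init_waitqueue_head() in qxl_device_init()):

	/* in qxl_release_free(): wake the waiter when the last
	 * outstanding release goes away
	 */
	if (atomic_dec_and_test(&qdev->release_count))
		wake_up_all(&qdev->release_event);

	/* in qxl_device_fini(): sleep until the host has processed
	 * all releases instead of polling with msleep(); 400ms is the
	 * same budget as the 20 * msleep(20) loop above
	 */
	qxl_io_notify_oom(qdev);
	wait_event_timeout(qdev->release_event,
			   atomic_read(&qdev->release_count) == 0,
			   msecs_to_jiffies(400));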
-Daniel

> +
>  	qxl_gem_fini(qdev);
>  	qxl_bo_fini(qdev);
>  	flush_work(&qdev->gc_work);
> diff --git a/drivers/gpu/drm/qxl/qxl_release.c b/drivers/gpu/drm/qxl/qxl_release.c
> index 28013fd1f8ea..43a5436853b7 100644
> --- a/drivers/gpu/drm/qxl/qxl_release.c
> +++ b/drivers/gpu/drm/qxl/qxl_release.c
> @@ -196,6 +196,7 @@ qxl_release_free(struct qxl_device *qdev,
>  		qxl_release_free_list(release);
>  		kfree(release);
>  	}
> +	atomic_dec(&qdev->release_count);
>  }
>  
>  static int qxl_release_bo_alloc(struct qxl_device *qdev,
> @@ -344,6 +345,7 @@ int qxl_alloc_release_reserved(struct qxl_device *qdev, unsigned long size,
>  			*rbo = NULL;
>  		return idr_ret;
>  	}
> +	atomic_inc(&qdev->release_count);
>  
>  	mutex_lock(&qdev->release_mutex);
>  	if (qdev->current_release_bo_offset[cur_idx] + 1 >= releases_per_bo[cur_idx]) {
> -- 
> 2.29.2
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch