[PATCH 2/2] drm/scheduler: Remove obsolete spinlock.

On 2018-05-16 03:31, Andrey Grodzovsky wrote:
> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky at amd.com>
> ---
>   drivers/gpu/drm/scheduler/gpu_scheduler.c | 4 ----
>   include/drm/gpu_scheduler.h               | 1 -
>   2 files changed, 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> index 1f1dd70..2569a63 100644
> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> @@ -140,7 +140,6 @@ int drm_sched_entity_init(struct drm_gpu_scheduler *sched,
>   	entity->last_scheduled = NULL;
>   
>   	spin_lock_init(&entity->rq_lock);
> -	spin_lock_init(&entity->queue_lock);
>   	spsc_queue_init(&entity->job_queue);
>   
>   	atomic_set(&entity->fence_seq, 0);
> @@ -424,11 +423,8 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
>   
>   	trace_drm_sched_job(sched_job, entity);
>   
> -	spin_lock(&entity->queue_lock);
>   	first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node);
>   
> -	spin_unlock(&entity->queue_lock);
Is it safe to push to your spsc queue from multiple producers simultaneously?

Regards,
David Zhou
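
For context, a minimal user-space sketch (C11 atomics, producer side only) of
the kind of xchg-based push that would make this safe. This is only an
illustration, not the kernel's spsc_queue code; whether spsc_queue_push()
actually uses this pattern needs to be checked in the spsc_queue header:

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct node {
	struct node *_Atomic next;
};

struct queue {
	struct node *_Atomic head;
	/* Points at the slot the next pushed node should be linked into. */
	struct node *_Atomic *_Atomic tail;
};

static void queue_init(struct queue *q)
{
	atomic_store(&q->head, (struct node *)NULL);
	atomic_store(&q->tail, &q->head);
}

/* Returns true when the queue was empty, i.e. this push queued the "first" job. */
static bool queue_push(struct queue *q, struct node *n)
{
	struct node *_Atomic *prev_tail;

	atomic_store(&n->next, (struct node *)NULL);

	/*
	 * The exchange atomically claims the previous tail slot; even if two
	 * producers race here, each one gets a distinct slot to link into,
	 * so the push itself needs no lock.
	 */
	prev_tail = atomic_exchange(&q->tail, &n->next);
	atomic_store(prev_tail, n);

	return prev_tail == &q->head;
}

Whether spsc_queue_push() follows this pattern, and whether its "first"
return value stays meaningful under concurrent pushes, is what would need to
be confirmed before dropping queue_lock.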
> -
>   	/* first job wakes up scheduler */
>   	if (first) {
>   		/* Add the entity to the run queue */
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index 350a62c..683eb65 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -56,7 +56,6 @@ struct drm_sched_entity {
>   	spinlock_t			rq_lock;
>   	struct drm_gpu_scheduler	*sched;
>   
> -	spinlock_t			queue_lock;
>   	struct spsc_queue		job_queue;
>   
>   	atomic_t			fence_seq;


