[PATCH 2/2] drm/scheduler: Remove obsolete spinlock.

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

 



Can you please elaborate more, maybe give an example - I don't 
understand yet the problematic scenario.

Andrey


On 05/16/2018 02:50 AM, Christian König wrote:
> No, that spinlock is indeed incorrect. I
>
> See even when we protect the spsc queue with a spinlock that doesn't 
> make it correct. It can happen that the jobs pushed to the queue are 
> reversed in their sequence order and that can cause severe problems in 
> the memory management.
>
> Christian.
>
> Am 16.05.2018 um 05:33 schrieb Grodzovsky, Andrey:
>> Yeah, that what I am not sure about... It's lockless in a sense of 
>> single producer single consumer but not for multiple concurrent 
>> producers... So now I think this spinlock should stay there... It 
>> just looked useless to me at first sight...
>>
>> Andrey
>>
>> ________________________________________
>> From: Zhou, David(ChunMing)
>> Sent: 15 May 2018 23:04:44
>> To: Grodzovsky, Andrey; amd-gfx at lists.freedesktop.org; 
>> dri-devel at lists.freedesktop.org
>> Cc: Koenig, Christian
>> Subject: Re: [PATCH 2/2] drm/scheduler: Remove obsolete spinlock.
>>
>>
>>
>> On 2018å¹´05æ??16æ?¥ 03:31, Andrey Grodzovsky wrote:
>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky at amd.com>
>>> ---
>>>    drivers/gpu/drm/scheduler/gpu_scheduler.c | 4 ----
>>>    include/drm/gpu_scheduler.h               | 1 -
>>>    2 files changed, 5 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c 
>>> b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>> index 1f1dd70..2569a63 100644
>>> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>> @@ -140,7 +140,6 @@ int drm_sched_entity_init(struct 
>>> drm_gpu_scheduler *sched,
>>>        entity->last_scheduled = NULL;
>>>
>>>        spin_lock_init(&entity->rq_lock);
>>> -     spin_lock_init(&entity->queue_lock);
>>>        spsc_queue_init(&entity->job_queue);
>>>
>>>        atomic_set(&entity->fence_seq, 0);
>>> @@ -424,11 +423,8 @@ void drm_sched_entity_push_job(struct 
>>> drm_sched_job *sched_job,
>>>
>>>        trace_drm_sched_job(sched_job, entity);
>>>
>>> -     spin_lock(&entity->queue_lock);
>>>        first = spsc_queue_push(&entity->job_queue, 
>>> &sched_job->queue_node);
>>>
>>> -     spin_unlock(&entity->queue_lock);
>> Is your spsc safely to be added simultaneously?
>>
>> Regards,
>> David Zhou
>>> -
>>>        /* first job wakes up scheduler */
>>>        if (first) {
>>>                /* Add the entity to the run queue */
>>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>>> index 350a62c..683eb65 100644
>>> --- a/include/drm/gpu_scheduler.h
>>> +++ b/include/drm/gpu_scheduler.h
>>> @@ -56,7 +56,6 @@ struct drm_sched_entity {
>>>        spinlock_t                      rq_lock;
>>>        struct drm_gpu_scheduler        *sched;
>>>
>>> -     spinlock_t                      queue_lock;
>>>        struct spsc_queue               job_queue;
>>>
>>>        atomic_t                        fence_seq;
>



[Index of Archives]     [Linux USB Devel]     [Linux Audio Users]     [Yosemite News]     [Linux Kernel]     [Linux SCSI]

  Powered by Linux