[PATCH] drm/amdgpu: use dep_sync for CS dependency/syncobj

On 2017-11-13 17:31, Christian König wrote:
> On 13.11.2017 09:05, Chunming Zhou wrote:
>>
>>
>> On 2017-11-13 15:51, Christian König wrote:
>>> On 13.11.2017 03:53, Chunming Zhou wrote:
>>>> Otherwise, they could be optimized away by the scheduled fence.
>>>>
>>>> Change-Id: I6857eee20aebeaad793d9fe4e1b5222f1be7470e
>>>> Signed-off-by: Chunming Zhou <david1.zhou at amd.com>
>>>
>>> First of all, the patch is Reviewed-by: Christian König 
>>> <christian.koenig at amd.com>.
>> Thanks.
>>
>>>
>>> Second, do you remember why we did this? I have a vague memory 
>>> that a certain CTS test failed because we didn't completely 
>>> synchronize between dependencies explicitly added by a semaphore.
>> Yes, exactly. The Vulkan CTS can fail for some semaphore cases, 
>> which sync two jobs with different contexts but the same process and 
>> the same engine. Although the two jobs are serialized in the HW ring, 
>> their executions overlap in the pipeline, which can result in a hang.
>
> Ah, now I remember. Yeah, the problem is that the two job executions 
> overlap and we need to insert a pipeline sync between them.
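
[Editor's sketch of the scheduler-side handling this patch relies on. 
This is paraphrased for illustration, not the real amdgpu_job_dependency(); 
the function name below is made up, while amdgpu_sync_get_fence(), 
amdgpu_sync_fence() and amd_sched_dependency_optimized() are assumed to be 
the driver/scheduler entry points of this era, and the sched_sync container 
is assumed to exist alongside dep_sync.]

/*
 * Sketch: what the dep_sync/sync split buys us when the GPU scheduler
 * pulls dependencies for a job.
 */
static struct dma_fence *job_dependency_sketch(struct amd_sched_job *sched_job)
{
        struct amdgpu_job *job = to_amdgpu_job(sched_job);

        /* Explicit dependencies (CS dependencies/syncobjs) are kept in
         * dep_sync and are always returned, even when the fence comes
         * from the same ring, so they can never be optimized away. */
        struct dma_fence *fence = amdgpu_sync_get_fence(&job->dep_sync);

        /* If the dependency was merely scheduled (not yet completed) on
         * the same ring, the two jobs would overlap in the pipeline;
         * remember the fence so a pipeline sync is emitted before this
         * job runs. */
        if (fence && amd_sched_dependency_optimized(fence, sched_job->s_entity))
                amdgpu_sync_fence(job->adev, &job->sched_sync, fence);

        /* Implicit dependencies in job->sync may still be optimized by
         * the scheduled fence logic. */
        if (!fence)
                fence = amdgpu_sync_get_fence(&job->sync);

        return fence;
}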
>
>>
>>>
>>> Then can we narrow this down into a unit test for libdrm? Probably 
>>> not so easy to reproduce otherwise.
>> Also yes, this is an occasional issue; it's not very easy to reproduce.
>
> Yeah, we would need to do something like: job 1 writes value A to 
> memory location X using shaders, then job 2 writes value B to the same 
> location using the CP.
>
> Then send both with a semaphore dependency between the two. If 
> everything works as expected we see value B, but if we don't wait 
> for the shaders to finish before running job 2 we see value A.
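
[Editor's sketch of such a test against the libdrm amdgpu API. 
amdgpu_cs_ctx_create(), amdgpu_bo_alloc() and amdgpu_bo_cpu_map() are real 
libdrm entry points; the submit_*() and wait_for_submissions() helpers are 
hypothetical stand-ins for building the actual shader-dispatch and CP write 
command streams and waiting on their fences.]

/* Job 1 writes VALUE_A to X via a long-running shader, job 2 writes
 * VALUE_B to X via the CP with a dependency on job 1. With a correct
 * pipeline sync we must read back VALUE_B. */
#include <amdgpu.h>
#include <amdgpu_drm.h>
#include <assert.h>
#include <stdint.h>

#define VALUE_A 0xAAAAAAAAu
#define VALUE_B 0xBBBBBBBBu

static void test_dep_pipeline_sync(amdgpu_device_handle dev)
{
        struct amdgpu_bo_alloc_request req = {
                .alloc_size = 4096,
                .phys_alignment = 4096,
                .preferred_heap = AMDGPU_GEM_DOMAIN_GTT,
        };
        amdgpu_context_handle ctx1, ctx2;
        amdgpu_bo_handle bo;
        volatile uint32_t *x;   /* CPU view of memory location X */

        assert(amdgpu_cs_ctx_create(dev, &ctx1) == 0);
        assert(amdgpu_cs_ctx_create(dev, &ctx2) == 0);
        assert(amdgpu_bo_alloc(dev, &req, &bo) == 0);
        assert(amdgpu_bo_cpu_map(bo, (void **)&x) == 0);
        *x = 0;

        /* Hypothetical helper: shader dispatch from context 1 that
         * writes VALUE_A to the buffer. */
        submit_shader_write(dev, ctx1, bo, VALUE_A);

        /* Hypothetical helper: CP write of VALUE_B from context 2,
         * carrying an AMDGPU_CHUNK_ID_DEPENDENCIES chunk on job 1. */
        submit_cp_write_with_dep(dev, ctx2, bo, VALUE_B, ctx1);

        /* Hypothetical helper: wait for both submissions to signal. */
        wait_for_submissions(dev);

        /* Without the pipeline sync the CP write can land first and the
         * shader then overwrites it, so we would read VALUE_A here. */
        assert(*x == VALUE_B);
}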
>
> Do you have time to put all this into a unit test? I think that would 
> be important to make sure we don't break it again in the future.
>
> Otherwise Andrey can probably take a look.
OK, feel free to assign.

Thanks,
David Zhou
>
> Regards,
> Christian.
>
>>
>> Regards,
>> David Zhou
>>>
>>> Thanks,
>>> Christian.
>>>
>>>> ---
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 4 ++--
>>>>   1 file changed, 2 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>>> index 673fb9f4301e..4a2af571d35f 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>>> @@ -1078,7 +1078,7 @@ static int amdgpu_cs_process_fence_dep(struct amdgpu_cs_parser *p,
>>>>               amdgpu_ctx_put(ctx);
>>>>               return r;
>>>>           } else if (fence) {
>>>> -            r = amdgpu_sync_fence(p->adev, &p->job->sync,
>>>> +            r = amdgpu_sync_fence(p->adev, &p->job->dep_sync,
>>>>                             fence);
>>>>               dma_fence_put(fence);
>>>>               amdgpu_ctx_put(ctx);
>>>> @@ -1103,7 +1103,7 @@ static int amdgpu_syncobj_lookup_and_add_to_sync(struct amdgpu_cs_parser *p,
>>>>       if (r)
>>>>           return r;
>>>>
>>>> -    r = amdgpu_sync_fence(p->adev, &p->job->sync, fence);
>>>> +    r = amdgpu_sync_fence(p->adev, &p->job->dep_sync, fence);
>>>>       dma_fence_put(fence);
>>>>
>>>>       return r;
>>>
>>>
>>
>
>


