[PATCH] drm/amdgpu: initialize the context reset_counter in amdgpu_ctx_init

On Thu, Oct 6, 2016 at 6:28 AM, Marek Olšák <maraeo at gmail.com> wrote:
> Do we need to bump the DRM version for this bug fix?
>

Alternatively, we could just cc stable.  Given that the whole reset
handling in Mesa is still kind of up in the air, I'm not sure how
critical it is.

Alex

> Marek
>
>
> On Oct 4, 2016 10:20 AM, "Christian König" <deathsimple at vodafone.de> wrote:
>>
>> Am 04.10.2016 um 09:45 schrieb Nicolai Hähnle:
>>>
>>> From: Nicolai Hähnle <nicolai.haehnle at amd.com>
>>>
>>> Ensure that we really only report a GPU reset if one has happened since
>>> the
>>> creation of the context.
>>>
>>> Signed-off-by: Nicolai Hähnle <nicolai.haehnle at amd.com>
>>
>>
>> Reviewed-by: Christian König <christian.koenig at amd.com>
>>
>>> ---
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 3 +++
>>>   1 file changed, 3 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>>> index e203e55..a5e2fcb 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>>> @@ -36,20 +36,23 @@ static int amdgpu_ctx_init(struct amdgpu_device *adev, struct amdgpu_ctx *ctx)
>>>         spin_lock_init(&ctx->ring_lock);
>>>         ctx->fences = kcalloc(amdgpu_sched_jobs * AMDGPU_MAX_RINGS,
>>>                               sizeof(struct fence*), GFP_KERNEL);
>>>         if (!ctx->fences)
>>>                 return -ENOMEM;
>>>         for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
>>>                 ctx->rings[i].sequence = 1;
>>>                 ctx->rings[i].fences = &ctx->fences[amdgpu_sched_jobs * i];
>>>         }
>>> +
>>> +       ctx->reset_counter = atomic_read(&adev->gpu_reset_counter);
>>> +
>>>         /* create context entity for each ring */
>>>         for (i = 0; i < adev->num_rings; i++) {
>>>                 struct amdgpu_ring *ring = adev->rings[i];
>>>                 struct amd_sched_rq *rq;
>>>                 rq = &ring->sched.sched_rq[AMD_SCHED_PRIORITY_NORMAL];
>>>                 r = amd_sched_entity_init(&ring->sched, &ctx->rings[i].entity,
>>>                                           rq, amdgpu_sched_jobs);
>>>                 if (r)
>>>                         break;
>>
>>
>>
>> _______________________________________________
>> amd-gfx mailing list
>> amd-gfx at lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>
>
> _______________________________________________
> amd-gfx mailing list
> amd-gfx at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>

