On 04.10.2016 at 09:45, Nicolai Hähnle wrote:
> From: Nicolai Hähnle <nicolai.haehnle at amd.com>
>
> Ensure that we really only report a GPU reset if one has happened since the
> creation of the context.
>
> Signed-off-by: Nicolai Hähnle <nicolai.haehnle at amd.com>

Reviewed-by: Christian König <christian.koenig at amd.com>.

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> index e203e55..a5e2fcb 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> @@ -36,20 +36,23 @@ static int amdgpu_ctx_init(struct amdgpu_device *adev, struct amdgpu_ctx *ctx)
>  	spin_lock_init(&ctx->ring_lock);
>  	ctx->fences = kcalloc(amdgpu_sched_jobs * AMDGPU_MAX_RINGS,
>  			      sizeof(struct fence*), GFP_KERNEL);
>  	if (!ctx->fences)
>  		return -ENOMEM;
>
>  	for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
>  		ctx->rings[i].sequence = 1;
>  		ctx->rings[i].fences = &ctx->fences[amdgpu_sched_jobs * i];
>  	}
> +
> +	ctx->reset_counter = atomic_read(&adev->gpu_reset_counter);
> +
>  	/* create context entity for each ring */
>  	for (i = 0; i < adev->num_rings; i++) {
>  		struct amdgpu_ring *ring = adev->rings[i];
>  		struct amd_sched_rq *rq;
>
>  		rq = &ring->sched.sched_rq[AMD_SCHED_PRIORITY_NORMAL];
>  		r = amd_sched_entity_init(&ring->sched, &ctx->rings[i].entity,
>  					  rq, amdgpu_sched_jobs);
>  		if (r)
>  			break;
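
For readers following along, here is a minimal sketch of how a per-context
snapshot like the one the patch adds in amdgpu_ctx_init() can be consumed
later when userspace asks for the reset status. This is an illustration under
assumed names, not code from the patch: the helper
example_ctx_query_reset_status() and its signature are made up, and the
AMDGPU_CTX_*_RESET status values are assumed from the amdgpu uapi.

	/*
	 * Hypothetical sketch: compare the counter snapshot taken when the
	 * context was created against the current global reset counter to
	 * decide whether a reset happened during this context's lifetime.
	 */
	static int example_ctx_query_reset_status(struct amdgpu_device *adev,
						   struct amdgpu_ctx *ctx,
						   uint32_t *reset_status)
	{
		/* Current global counter, bumped once per completed GPU reset. */
		unsigned reset_counter = atomic_read(&adev->gpu_reset_counter);

		if (ctx->reset_counter == reset_counter) {
			/* No reset since the snapshot stored at context creation. */
			*reset_status = AMDGPU_CTX_NO_RESET;
		} else {
			/* At least one reset happened; report it to userspace. */
			*reset_status = AMDGPU_CTX_UNKNOWN_RESET;
		}

		/* Re-arm so subsequent queries only report newer resets. */
		ctx->reset_counter = reset_counter;
		return 0;
	}

Without the initialization added by the patch, ctx->reset_counter would start
at whatever the allocator left in it, so a freshly created context could
spuriously report a reset that happened before it existed.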