Re: [RFC v3 06/12] drm/amdgpu: Drop hive->in_reset

On 1/26/2022 4:07 AM, Andrey Grodzovsky wrote:
Since we serialize all resets, there is no need to protect against
concurrent resets.

Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@xxxxxxx>
Reviewed-by: Christian König <christian.koenig@xxxxxxx>
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 19 +------------------
  drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c   |  1 -
  drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.h   |  1 -
  3 files changed, 1 insertion(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 258ec3c0b2af..107a393ebbfd 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -5013,25 +5013,9 @@ int amdgpu_device_gpu_recover_imp(struct amdgpu_device *adev,
  	dev_info(adev->dev, "GPU %s begin!\n",
  		need_emergency_restart ? "jobs stop":"reset");
-	/*
-	 * Here we trylock to avoid chain of resets executing from
-	 * either trigger by jobs on different adevs in XGMI hive or jobs on
-	 * different schedulers for same device while this TO handler is running.
-	 * We always reset all schedulers for device and all devices for XGMI
-	 * hive so that should take care of them too.
-	 */
  	hive = amdgpu_get_xgmi_hive(adev);
-	if (hive) {
-		if (atomic_cmpxchg(&hive->in_reset, 0, 1) != 0) {
-			DRM_INFO("Bailing on TDR for s_job:%llx, hive: %llx as another already in progress",
-				job ? job->base.id : -1, hive->hive_id);
-			amdgpu_put_xgmi_hive(hive);
-			if (job && job->vm)
-				drm_sched_increase_karma(&job->base);
-			return 0;
-		}

This function in general will reset all devices in a hive.

Consider a situation where GPU0 in hive0 reaches this function first, and GPU1 in hive0 also hangs shortly afterwards (before GPU0's recovery process starts resetting the other devices in the hive). We don't want to execute the work queued as part of GPU1's recovery as well, since both GPU0's and GPU1's recovery processes would try to reset all the devices in the hive.

In short - if a reset domain is already active, we probably don't need to queue another work item to the domain, since all devices in the domain are expected to be reset shortly anyway.

Thanks,
Lijo

+	if (hive)
  		mutex_lock(&hive->hive_lock);
-	}
  	reset_context.method = AMD_RESET_METHOD_NONE;
  	reset_context.reset_req_dev = adev;
@@ -5227,7 +5211,6 @@ int amdgpu_device_gpu_recover_imp(struct amdgpu_device *adev,
skip_recovery:
  	if (hive) {
-		atomic_set(&hive->in_reset, 0);
  		mutex_unlock(&hive->hive_lock);
  		amdgpu_put_xgmi_hive(hive);
  	}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
index a858e3457c5c..9ad742039ac9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
@@ -404,7 +404,6 @@ struct amdgpu_hive_info *amdgpu_get_xgmi_hive(struct amdgpu_device *adev)
  	INIT_LIST_HEAD(&hive->device_list);
  	INIT_LIST_HEAD(&hive->node);
  	mutex_init(&hive->hive_lock);
-	atomic_set(&hive->in_reset, 0);
  	atomic_set(&hive->number_devices, 0);
  	task_barrier_init(&hive->tb);
  	hive->pstate = AMDGPU_XGMI_PSTATE_UNKNOWN;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.h
index 6121aaa292cb..2f2ce53645a5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.h
@@ -33,7 +33,6 @@ struct amdgpu_hive_info {
  	struct list_head node;
  	atomic_t number_devices;
  	struct mutex hive_lock;
-	atomic_t in_reset;
  	int hi_req_count;
  	struct amdgpu_device *hi_req_gpu;
  	struct task_barrier tb;



