From: Trigger Huang <Trigger.Huang@xxxxxxx>

The current dev coredump implementation sometimes cannot fully satisfy
customers' requirements, because:

1. The dev coredump is under the control of gpu_recovery. Consider the
   following application scenarios:
   1) A customer may need to take the core dump with gpu_recovery
      disabled. This can be used for GPU hang debugging.
   2) A customer may need to disable the core dump with gpu_recovery
      enabled. This can be used for quick GPU recovery, especially for
      lightweight hangs that can be handled by soft recovery or a
      per-ring reset.
   3) A customer may need to enable the core dump with gpu_recovery
      enabled. This recovers the GPU but still records the core dump
      for later inspection in stress tests or system health checks.
   It is not easy to support all of these scenarios with
   amdgpu_gpu_recovery alone.

2. When a job timeout happens, the GPU status is dumped only after a
   lot of operations, such as soft reset, have already run. The concern
   here is that the dumped status is then no longer close to the GPU's
   real error state.

So this series introduces a new solution:

1. A new parameter, gpu_coredump, is added to decouple the coredump
   from GPU reset.
2. The coredump is taken immediately after a job timeout.

Trigger Huang (4):
  drm/amdgpu: Add gpu_coredump parameter
  drm/amdgpu: Use gpu_coredump to control core dump
  drm/amdgpu: skip printing vram_lost if needed
  drm/amdgpu: Do core dump immediately when job tmo

 drivers/gpu/drm/amd/amdgpu/amdgpu.h           |  1 +
 .../gpu/drm/amd/amdgpu/amdgpu_dev_coredump.c  | 19 +++---
 .../gpu/drm/amd/amdgpu/amdgpu_dev_coredump.h  |  6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |  6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c       |  8 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c       | 64 +++++++++++++++++++
 6 files changed, 89 insertions(+), 15 deletions(-)

-- 
2.34.1