From: Rob Clark <robdclark@xxxxxxxxxxxx>

Until various PM devfreq/QoS and interconnect patches land, we could
potentially trigger reclaim from the GPU scheduler thread, and under
enough memory pressure that could trigger a sort of deadlock.
Eventually the wait will time out and we'll move on to consider other
GEM objects.  But given that there is still a potential for
deadlock/stalling, we should reduce the timeout to contain the damage.

Signed-off-by: Rob Clark <robdclark@xxxxxxxxxxxx>
---
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
index 5a7d48c02c4b..07ca4ddfe4e3 100644
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -75,7 +75,7 @@ static bool
 wait_for_idle(struct drm_gem_object *obj)
 {
 	enum dma_resv_usage usage = dma_resv_usage_rw(true);
-	return dma_resv_wait_timeout(obj->resv, usage, false, 1000) > 0;
+	return dma_resv_wait_timeout(obj->resv, usage, false, 10) > 0;
 }
 
 static bool
-- 
2.41.0
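
For illustration only, here is a minimal, self-contained sketch of the
bounded-wait pattern this patch is tuning: a shrinker-style helper that
gives a busy GEM object only a short dma_resv wait before giving up and
moving on, so a fence whose signaler is itself stuck in reclaim cannot
stall the shrinker for long.  The helper name try_shrink_object() and
the SHRINK_WAIT_TIMEOUT constant are assumptions made for this sketch,
not code from msm_gem_shrinker.c; only dma_resv_usage_rw() and
dma_resv_wait_timeout() appear in the actual change.

#include <linux/dma-resv.h>
#include <drm/drm_gem.h>

/*
 * Hypothetical example, not part of the patch.  The timeout passed to
 * dma_resv_wait_timeout() is in jiffies; keeping it small bounds how
 * long the shrinker can be blocked on any single object.
 */
#define SHRINK_WAIT_TIMEOUT	10

static bool try_shrink_object(struct drm_gem_object *obj)
{
	/* Wait for both readers and writers of the object's resv */
	enum dma_resv_usage usage = dma_resv_usage_rw(true);
	long ret;

	ret = dma_resv_wait_timeout(obj->resv, usage, false,
				    SHRINK_WAIT_TIMEOUT);
	if (ret <= 0) {
		/* Timed out (0) or error (<0): skip this object */
		return false;
	}

	/* Object went idle within the bound; safe to reclaim it here */
	return true;
}

A scan loop built on such a helper would simply continue to the next
object whenever the helper returns false, which is why a long timeout
only delays forward progress rather than preventing it.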