On 2/18/21 4:05 PM, Andrey Grodzovsky wrote:
On 2/18/21 3:07 AM, Christian König wrote:
On 2/17/21 10:59 PM, Andrey Grodzovsky wrote:
Problem: If the scheduler is already stopped by the time the sched_entity
is released and the entity's job_queue is not empty, I encountered
a hang in drm_sched_entity_flush. This is because drm_sched_entity_is_idle
never becomes true.
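For reference, the wait that hangs is roughly the following (paraphrased from memory, so the exact details are approximate):

    /* drm_sched_entity_flush(), simplified sketch */
    wait_event_killable(sched->job_scheduled,
                        drm_sched_entity_is_idle(entity));

    /* drm_sched_entity_is_idle(), simplified sketch: the entity only
     * counts as idle once it is off its run queue list or its job_queue
     * has drained. With the scheduler thread already stopped nothing
     * drains the queue or removes the entity, so this keeps returning
     * false and the wait above never finishes.
     */
    static bool drm_sched_entity_is_idle(struct drm_sched_entity *entity)
    {
            rmb(); /* for list_empty to work without lock */

            if (list_empty(&entity->list) ||
                spsc_queue_count(&entity->job_queue) == 0)
                    return true;

            return false;
    }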
Fix: In drm_sched_fini detach all sched_entities from the
scheduler's run queues. This will satisfy drm_sched_entity_is_idle.
Also wake up all the processes stuck in sched_entity flushing,
since the scheduler main thread which normally wakes them up is stopped by now.
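Normally that wakeup comes from the scheduler main thread itself, roughly (again paraphrased, details approximate):

    /* at the end of the drm_sched_main() loop, after a job has been
     * picked from an entity and handed to the hardware
     */
    wake_up(&sched->job_scheduled);

With that thread already stopped, drm_sched_fini has to issue the wakeup on its own, which is what the wake_up_all() added below does.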
v2:
Reverse the order of drm_sched_rq_remove_entity and marking
s_entity as stopped to prevent reinsertion back into the rq due
to a race.
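The reinsertion this guards against happens in drm_sched_entity_push_job(), which (roughly, paraphrased) re-adds the entity under its rq_lock unless it is already marked stopped:

    /* drm_sched_entity_push_job(), simplified sketch of the relevant part */
    spin_lock(&entity->rq_lock);
    if (entity->stopped) {
            spin_unlock(&entity->rq_lock);
            DRM_ERROR("Trying to push to a killed entity\n");
            return;
    }
    drm_sched_rq_add_entity(entity->rq, entity);
    spin_unlock(&entity->rq_lock);

Setting stopped before calling drm_sched_rq_remove_entity() closes the window where a concurrent push could put the entity back on the rq right after it was removed.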
Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@xxxxxxx>
---
 drivers/gpu/drm/scheduler/sched_main.c | 31 +++++++++++++++++++++++++++++++
1 file changed, 31 insertions(+)
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 908b0b5..c6b7947 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -897,9 +897,40 @@ EXPORT_SYMBOL(drm_sched_init);
*/
void drm_sched_fini(struct drm_gpu_scheduler *sched)
{
+ int i;
+ struct drm_sched_entity *s_entity;
BTW: Please order that so that i is declared last.
if (sched->thread)
kthread_stop(sched->thread);
+	/* Detach all sched_entities from this scheduler once it's stopped */
+	for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
+ struct drm_sched_rq *rq = &sched->sched_rq[i];
+
+ if (!rq)
+ continue;
+
+		/* Loop this way because rq->lock is taken in drm_sched_rq_remove_entity */
+ spin_lock(&rq->lock);
+		while ((s_entity = list_first_entry_or_null(&rq->entities,
+						struct drm_sched_entity,
+						list))) {
+ spin_unlock(&rq->lock);
+
+ /* Prevent reinsertion and remove */
+ spin_lock(&s_entity->rq_lock);
+ s_entity->stopped = true;
+ drm_sched_rq_remove_entity(rq, s_entity);
+ spin_unlock(&s_entity->rq_lock);
Well, this spin_unlock/lock dance here doesn't look correct at all now.
Christian.
In what way? It's in the same order as at the other call sites (see
drm_sched_entity_push_job and drm_sched_entity_flush).
If I just locked rq->lock and did list_for_each_entry_safe while
manually deleting entity->list instead of calling
drm_sched_rq_remove_entity, this still would not be possible, as the
order of lock acquisition between s_entity->rq_lock
and rq->lock would be reversed compared to the call sites mentioned above.
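Roughly, the nesting at those call sites looks like this (paraphrased from memory, so take the details as approximate):

    /* e.g. drm_sched_entity_push_job(), simplified sketch */
    spin_lock(&entity->rq_lock);
    drm_sched_rq_add_entity(entity->rq, entity);   /* takes rq->lock inside */
    spin_unlock(&entity->rq_lock);

so entity->rq_lock is always the outer lock and rq->lock the inner one. Holding rq->lock across the spin_lock(&s_entity->rq_lock) in the loop would invert that order, hence dropping rq->lock first.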
Ah, now I understand. You need this because drm_sched_rq_remove_entity()
will grab the rq lock again!
The problem now is: what prevents the entity from being destroyed while you
remove it?
Christian.
Andrey
+
+ spin_lock(&rq->lock);
+ }
+ spin_unlock(&rq->lock);
+
+ }
+
+	/* Wakeup everyone stuck in drm_sched_entity_flush for this scheduler */
+ wake_up_all(&sched->job_scheduled);
+
/* Confirm no work left behind accessing device structures */
cancel_delayed_work_sync(&sched->work_tdr);