Re: [PATCH] drm/scheduler: fix race condition in load balancer

On 1/14/20 5:13 PM, Nirmoy wrote:

On 1/14/20 5:01 PM, Christian König wrote:
On 1/14/20 4:43 PM, Nirmoy Das wrote:
Jobs submitted to an entity should execute in the order they were
submitted. We ensure that by checking entity->job_queue in
drm_sched_entity_select_rq(), so that we don't load balance jobs within
an entity.

But because we update entity->job_queue later, in drm_sched_entity_push_job(),
there remains an open window during which entity->rq might still get updated
by drm_sched_entity_select_rq(), which should not be allowed.
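
To make the window concrete, this is the kind of interleaving the commit
message has in mind (an illustrative sketch, not taken from a real trace):

CPU/context A                            CPU/context B
-------------                            -------------
drm_sched_job_init(job1, entity, ...)
  drm_sched_entity_select_rq(entity)
    /* job_queue looks empty, may keep
       or switch to run queue X */
                                         drm_sched_job_init(job2, entity, ...)
                                           drm_sched_entity_select_rq(entity)
                                             /* job_queue still looks empty,
                                                may pick a different run queue Y */
drm_sched_entity_push_job(job1, entity)  drm_sched_entity_push_job(job2, entity)

Because entity->job_queue is only updated in drm_sched_entity_push_job(), the
second drm_sched_entity_select_rq() call does not see job1 and can still move
the entity to another run queue while job1 is in flight.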

NAK, concurrent calls to drm_sched_job_init()/drm_sched_entity_push_job() are not allowed in the first place or otherwise we mess up the fence sequence order and risk memory corruption.
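
For reference, the expected calling pattern is roughly the following; the
context structure, lock and helper name are illustrative only, not real
driver code:

/*
 * Sketch of a driver submission path that keeps drm_sched_job_init() and
 * drm_sched_entity_push_job() serialized per entity. The struct and the
 * lock are hypothetical; only the two scheduler calls are real API.
 */
static int submit_one_job(struct my_submit_ctx *ctx, struct drm_sched_job *job,
			  struct drm_sched_entity *entity)
{
	int r;

	mutex_lock(&ctx->submit_lock);	/* hypothetical per-entity lock */

	r = drm_sched_job_init(job, entity, ctx);
	if (r)
		goto out_unlock;

	/* ... set up command buffers, dependencies, fences ... */

	drm_sched_entity_push_job(job, entity);

out_unlock:
	mutex_unlock(&ctx->submit_lock);
	return r;
}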



Changes in this patch also improve job distribution.
Below are test results after running amdgpu_test from mesa drm

Before this patch:

sched_name     number of times it got scheduled
==========     ================================
sdma0          314
sdma1          32
comp_1.0.0     56
comp_1.1.0     0
comp_1.1.1     0
comp_1.2.0     0
comp_1.2.1     0
comp_1.3.0     0
comp_1.3.1     0

After this patch:

sched_name     number of times it got scheduled
==========     ================================
sdma1          243
sdma0          164
comp_1.0.1     14
comp_1.1.0     11
comp_1.1.1     10
comp_1.2.0     15
comp_1.2.1     14
comp_1.3.0     10
comp_1.3.1     10

Well, that is still rather nice to have. Why does that happen?

I think it is because we are updating num_jobs immediately after selecting a
new rq. Previously we did that much later, in drm_sched_entity_push_job()
after drm_sched_job_init(). The problem is that if I just do

@@ -562,6 +562,7 @@ int drm_sched_job_init(struct drm_sched_job *job,
 		return -ENOENT;
 
 	sched = entity->rq->sched;
+	atomic_inc(&entity->rq->sched->num_jobs);

@@ -498,7 +504,6 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
 	bool first;
 
 	trace_drm_sched_job(sched_job, entity);
-	atomic_inc(&entity->rq->sched->num_jobs);


num_jobs goes negative somewhere down the line. I am guessing it's hitting the race condition I explained in the commit message.

The race condition you explain in the commit message should be impossible to hit, or we have much, much larger problems than just an incorrect job count.

Incrementing num_jobs that early is not possible either, because the job might not get pushed to the entity at all if an error occurs.
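
A hypothetical caller flow that shows the problem (the failing setup step and
its name are made up for illustration):

	r = drm_sched_job_init(job, entity, ctx);	/* would bump num_jobs here */
	if (r)
		return r;

	r = prepare_job_resources(job);	/* hypothetical setup step that can fail */
	if (r) {
		/*
		 * The job is never pushed to the entity, so nothing ever
		 * decrements num_jobs for it and the counter drifts.
		 */
		drm_sched_job_cleanup(job);
		return r;
	}

	drm_sched_entity_push_job(job, entity);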

Christian.



Regards,

Nirmoy


Christian.


Fixes: 35e160e781a048 ("drm/scheduler: change entities rq even earlier")

Signed-off-by: Nirmoy Das <nirmoy.das@xxxxxxx>
Reported-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@xxxxxxx>
---
  drivers/gpu/drm/scheduler/sched_entity.c | 9 +++++++--
  drivers/gpu/drm/scheduler/sched_main.c   | 1 +
  include/drm/gpu_scheduler.h              | 1 +
  3 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index 2e3a058fc239..8414e084b6ac 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -67,6 +67,7 @@ int drm_sched_entity_init(struct drm_sched_entity *entity,
 	entity->priority = priority;
 	entity->sched_list = num_sched_list > 1 ? sched_list : NULL;
 	entity->last_scheduled = NULL;
+	entity->loadbalance_on = true;
 
 	if(num_sched_list)
 		entity->rq = &sched_list[0]->sched_rq[entity->priority];
@@ -447,6 +448,9 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity)
 	entity->last_scheduled = dma_fence_get(&sched_job->s_fence->finished);
 
 	spsc_queue_pop(&entity->job_queue);
+	if (!spsc_queue_count(&entity->job_queue))
+		entity->loadbalance_on = true;
+
 	return sched_job;
 }
@@ -463,7 +467,8 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
 	struct dma_fence *fence;
 	struct drm_sched_rq *rq;
 
-	if (spsc_queue_count(&entity->job_queue) || entity->num_sched_list <= 1)
+	atomic_inc(&entity->rq->sched->num_jobs);
+	if ((entity->num_sched_list <= 1) || !entity->loadbalance_on)
 		return;
 
 	fence = READ_ONCE(entity->last_scheduled);
@@ -477,6 +482,7 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
 		entity->rq = rq;
 	}
 
+	entity->loadbalance_on = false;
 	spin_unlock(&entity->rq_lock);
 }
@@ -498,7 +504,6 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
 	bool first;
 
 	trace_drm_sched_job(sched_job, entity);
-	atomic_inc(&entity->rq->sched->num_jobs);
 	WRITE_ONCE(entity->last_user, current->group_leader);
 	first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node);
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 3fad5876a13f..00fdc350134e 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -562,6 +562,7 @@ int drm_sched_job_init(struct drm_sched_job *job,
 		return -ENOENT;
 
 	sched = entity->rq->sched;
+	atomic_inc(&entity->rq->sched->num_jobs);
 
 	job->sched = sched;
 	job->entity = entity;
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 96a1a1b7526e..a5190869d323 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -97,6 +97,7 @@ struct drm_sched_entity {
 	struct dma_fence		*last_scheduled;
 	struct task_struct		*last_user;
 	bool				stopped;
+	bool				loadbalance_on;
 	struct completion		entity_idle;
 };


_______________________________________________
amd-gfx mailing list
amd-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/amd-gfx



