On Thu, Jun 24, 2021 at 7:30 PM Christian König <christian.koenig@xxxxxxx> wrote:
>
> On 24.06.21 at 16:00, Daniel Vetter wrote:
> > This is a very confusingly named function, because not only does it
> > init an object, it also arms it and provides the point of no return
> > for pushing a job into the scheduler. It would be nice if that were
> > a bit clearer in the interface.
>
> We originally had that in the push_job interface, but moved it to init
> for some reason I don't remember.
>
> > But the real reason is that I want to push the dependency tracking
> > helpers into the scheduler code, and that means drm_sched_job_init
> > must be called a lot earlier, without arming the job.
>
> I'm really asking myself whether I like that naming.
>
> What about using drm_sched_job_add_dependency instead?

Are you suggesting a s/drm_sched_job_init/drm_sched_job_add_dependency/,
or did you just reply to the wrong patch?
-Daniel

>
> Christian.
>
> >
> > Signed-off-by: Daniel Vetter <daniel.vetter@xxxxxxxxx>
> > Cc: Lucas Stach <l.stach@xxxxxxxxxxxxxx>
> > Cc: Russell King <linux+etnaviv@xxxxxxxxxxxxxxx>
> > Cc: Christian Gmeiner <christian.gmeiner@xxxxxxxxx>
> > Cc: Qiang Yu <yuq825@xxxxxxxxx>
> > Cc: Rob Herring <robh@xxxxxxxxxx>
> > Cc: Tomeu Vizoso <tomeu.vizoso@xxxxxxxxxxxxx>
> > Cc: Steven Price <steven.price@xxxxxxx>
> > Cc: Alyssa Rosenzweig <alyssa.rosenzweig@xxxxxxxxxxxxx>
> > Cc: David Airlie <airlied@xxxxxxxx>
> > Cc: Daniel Vetter <daniel@xxxxxxxx>
> > Cc: Sumit Semwal <sumit.semwal@xxxxxxxxxx>
> > Cc: "Christian König" <christian.koenig@xxxxxxx>
> > Cc: Masahiro Yamada <masahiroy@xxxxxxxxxx>
> > Cc: Kees Cook <keescook@xxxxxxxxxxxx>
> > Cc: Adam Borowski <kilobyte@xxxxxxxxxx>
> > Cc: Nick Terrell <terrelln@xxxxxx>
> > Cc: Mauro Carvalho Chehab <mchehab+huawei@xxxxxxxxxx>
> > Cc: Paul Menzel <pmenzel@xxxxxxxxxxxxx>
> > Cc: Sami Tolvanen <samitolvanen@xxxxxxxxxx>
> > Cc: Viresh Kumar <viresh.kumar@xxxxxxxxxx>
> > Cc: Alex Deucher <alexander.deucher@xxxxxxx>
> > Cc: Dave Airlie <airlied@xxxxxxxxxx>
> > Cc: Nirmoy Das <nirmoy.das@xxxxxxx>
> > Cc: Deepak R Varma <mh12gx2825@xxxxxxxxx>
> > Cc: Lee Jones <lee.jones@xxxxxxxxxx>
> > Cc: Kevin Wang <kevin1.wang@xxxxxxx>
> > Cc: Chen Li <chenli@xxxxxxxxxxxxx>
> > Cc: Luben Tuikov <luben.tuikov@xxxxxxx>
> > Cc: "Marek Olšák" <marek.olsak@xxxxxxx>
> > Cc: Dennis Li <Dennis.Li@xxxxxxx>
> > Cc: Maarten Lankhorst <maarten.lankhorst@xxxxxxxxxxxxxxx>
> > Cc: Andrey Grodzovsky <andrey.grodzovsky@xxxxxxx>
> > Cc: Sonny Jiang <sonny.jiang@xxxxxxx>
> > Cc: Boris Brezillon <boris.brezillon@xxxxxxxxxxxxx>
> > Cc: Tian Tao <tiantao6@xxxxxxxxxxxxx>
> > Cc: Jack Zhang <Jack.Zhang1@xxxxxxx>
> > Cc: etnaviv@xxxxxxxxxxxxxxxxxxxxx
> > Cc: lima@xxxxxxxxxxxxxxxxxxxxx
> > Cc: linux-media@xxxxxxxxxxxxxxx
> > Cc: linaro-mm-sig@xxxxxxxxxxxxxxxx
> > ---
> >  .gitignore                               |  1 +
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c   |  2 ++
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  |  2 ++
> >  drivers/gpu/drm/etnaviv/etnaviv_sched.c  |  2 ++
> >  drivers/gpu/drm/lima/lima_sched.c        |  2 ++
> >  drivers/gpu/drm/panfrost/panfrost_job.c  |  2 ++
> >  drivers/gpu/drm/scheduler/sched_entity.c |  6 +++---
> >  drivers/gpu/drm/scheduler/sched_fence.c  | 15 ++++++++++-----
> >  drivers/gpu/drm/scheduler/sched_main.c   | 23 ++++++++++++++++++++++-
> >  include/drm/gpu_scheduler.h              |  6 +++++-
> >  10 files changed, 51 insertions(+), 10 deletions(-)
> >
> > diff --git a/.gitignore b/.gitignore
> > index 7afd412dadd2..52433a930299 100644
> > --- a/.gitignore
> > +++ b/.gitignore
> > @@ -66,6 +66,7 @@ modules.order
> >  /modules.builtin
> >  /modules.builtin.modinfo
> >  /modules.nsdeps
> > +*.builtin
> >
> >  #
> >  # RPM spec file (make rpm-pkg)
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > index c5386d13eb4a..a4ec092af9a7 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > @@ -1226,6 +1226,8 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
> >         if (r)
> >                 goto error_unlock;
> >
> > +       drm_sched_job_arm(&job->base);
> > +
> >         /* No memory allocation is allowed while holding the notifier lock.
> >          * The lock is held until amdgpu_cs_submit is finished and fence is
> >          * added to BOs.
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> > index d33e6d97cc89..5ddb955d2315 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> > @@ -170,6 +170,8 @@ int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity,
> >         if (r)
> >                 return r;
> >
> > +       drm_sched_job_arm(&job->base);
> > +
> >         *f = dma_fence_get(&job->base.s_fence->finished);
> >         amdgpu_job_free_resources(job);
> >         drm_sched_entity_push_job(&job->base, entity);
> > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> > index 19826e504efc..af1671f01c7f 100644
> > --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> > +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> > @@ -163,6 +163,8 @@ int etnaviv_sched_push_job(struct drm_sched_entity *sched_entity,
> >         if (ret)
> >                 goto out_unlock;
> >
> > +       drm_sched_job_arm(&submit->sched_job);
> > +
> >         submit->out_fence = dma_fence_get(&submit->sched_job.s_fence->finished);
> >         submit->out_fence_id = idr_alloc_cyclic(&submit->gpu->fence_idr,
> >                                                 submit->out_fence, 0,
> > diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
> > index ecf3267334ff..bd1af1fd8c0f 100644
> > --- a/drivers/gpu/drm/lima/lima_sched.c
> > +++ b/drivers/gpu/drm/lima/lima_sched.c
> > @@ -129,6 +129,8 @@ int lima_sched_task_init(struct lima_sched_task *task,
> >                 return err;
> >         }
> >
> > +       drm_sched_job_arm(&task->base);
> > +
> >         task->num_bos = num_bos;
> >         task->vm = lima_vm_get(vm);
> >
> > diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
> > index beb62c8fc851..1e950534b9b0 100644
> > --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> > +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> > @@ -244,6 +244,8 @@ int panfrost_job_push(struct panfrost_job *job)
> >                 goto unlock;
> >         }
> >
> > +       drm_sched_job_arm(&job->base);
> > +
> >         job->render_done_fence = dma_fence_get(&job->base.s_fence->finished);
> >
> >         ret = panfrost_acquire_object_fences(job->bos, job->bo_count,
> > diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
> > index 79554aa4dbb1..f7347c284886 100644
> > --- a/drivers/gpu/drm/scheduler/sched_entity.c
> > +++ b/drivers/gpu/drm/scheduler/sched_entity.c
> > @@ -485,9 +485,9 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
> >   * @sched_job: job to submit
> >   * @entity: scheduler entity
> >   *
> > - * Note: To guarantee that the order of insertion to queue matches
> > - * the job's fence sequence number this function should be
> > - * called with drm_sched_job_init under common lock.
> > + * Note: To guarantee that the order of insertion to queue matches the job's
> > + * fence sequence number this function should be called with drm_sched_job_arm()
> > + * under common lock.
> >   *
> >   * Returns 0 for success, negative error code otherwise.
> >   */
> > diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
> > index 69de2c76731f..0ba810c198bd 100644
> > --- a/drivers/gpu/drm/scheduler/sched_fence.c
> > +++ b/drivers/gpu/drm/scheduler/sched_fence.c
> > @@ -152,11 +152,10 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f)
> >  }
> >  EXPORT_SYMBOL(to_drm_sched_fence);
> >
> > -struct drm_sched_fence *drm_sched_fence_create(struct drm_sched_entity *entity,
> > -                                              void *owner)
> > +struct drm_sched_fence *drm_sched_fence_alloc(struct drm_sched_entity *entity,
> > +                                             void *owner)
> >  {
> >         struct drm_sched_fence *fence = NULL;
> > -       unsigned seq;
> >
> >         fence = kmem_cache_zalloc(sched_fence_slab, GFP_KERNEL);
> >         if (fence == NULL)
> > @@ -166,13 +165,19 @@ struct drm_sched_fence *drm_sched_fence_create(struct drm_sched_entity *entity,
> >         fence->sched = entity->rq->sched;
> >         spin_lock_init(&fence->lock);
> >
> > +       return fence;
> > +}
> > +
> > +void drm_sched_fence_init(struct drm_sched_fence *fence,
> > +                         struct drm_sched_entity *entity)
> > +{
> > +       unsigned seq;
> > +
> >         seq = atomic_inc_return(&entity->fence_seq);
> >         dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
> >                        &fence->lock, entity->fence_context, seq);
> >         dma_fence_init(&fence->finished, &drm_sched_fence_ops_finished,
> >                        &fence->lock, entity->fence_context + 1, seq);
> > -
> > -       return fence;
> >  }
> >
> >  module_init(drm_sched_fence_slab_init);
> > diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> > index 61420a9c1021..70eefed17e06 100644
> > --- a/drivers/gpu/drm/scheduler/sched_main.c
> > +++ b/drivers/gpu/drm/scheduler/sched_main.c
> > @@ -48,9 +48,11 @@
> >  #include <linux/wait.h>
> >  #include <linux/sched.h>
> >  #include <linux/completion.h>
> > +#include <linux/dma-resv.h>
> >  #include <uapi/linux/sched/types.h>
> >
> >  #include <drm/drm_print.h>
> > +#include <drm/drm_gem.h>
> >  #include <drm/gpu_scheduler.h>
> >  #include <drm/spsc_queue.h>
> >
> > @@ -594,7 +596,7 @@ int drm_sched_job_init(struct drm_sched_job *job,
> >         job->sched = sched;
> >         job->entity = entity;
> >         job->s_priority = entity->rq - sched->sched_rq;
> > -       job->s_fence = drm_sched_fence_create(entity, owner);
> > +       job->s_fence = drm_sched_fence_alloc(entity, owner);
> >         if (!job->s_fence)
> >                 return -ENOMEM;
> >         job->id = atomic64_inc_return(&sched->job_id_count);
> > @@ -605,6 +607,25 @@
> >  }
> >  EXPORT_SYMBOL(drm_sched_job_init);
> >
> > +/**
> > + * drm_sched_job_arm - arm a scheduler job for execution
> > + * @job: scheduler job to arm
> > + *
> > + * This arms a scheduler job for execution. Specifically it initializes the
> > + * &drm_sched_job.s_fence of @job, so that it can be attached to struct dma_resv
> > + * or other places that need to track the completion of this job.
> > + *
> > + * Refer to drm_sched_entity_push_job() documentation for locking
> > + * considerations.
> > + *
> > + * This can only be called if drm_sched_job_init() succeeded.
> > + */
> > +void drm_sched_job_arm(struct drm_sched_job *job)
> > +{
> > +       drm_sched_fence_init(job->s_fence, job->entity);
> > +}
> > +EXPORT_SYMBOL(drm_sched_job_arm);
> > +
> >  /**
> >   * drm_sched_job_cleanup - clean up scheduler job resources
> >   *
> > diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> > index d18af49fd009..80438d126c9d 100644
> > --- a/include/drm/gpu_scheduler.h
> > +++ b/include/drm/gpu_scheduler.h
> > @@ -313,6 +313,7 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched);
> >  int drm_sched_job_init(struct drm_sched_job *job,
> >                         struct drm_sched_entity *entity,
> >                         void *owner);
> > +void drm_sched_job_arm(struct drm_sched_job *job);
> >  void drm_sched_entity_modify_sched(struct drm_sched_entity *entity,
> >                                     struct drm_gpu_scheduler **sched_list,
> >                                     unsigned int num_sched_list);
> > @@ -352,8 +353,11 @@ void drm_sched_entity_set_priority(struct drm_sched_entity *entity,
> >                                    enum drm_sched_priority priority);
> >  bool drm_sched_entity_is_ready(struct drm_sched_entity *entity);
> >
> > -struct drm_sched_fence *drm_sched_fence_create(
> > +struct drm_sched_fence *drm_sched_fence_alloc(
> >         struct drm_sched_entity *s_entity, void *owner);
> > +void drm_sched_fence_init(struct drm_sched_fence *fence,
> > +                         struct drm_sched_entity *entity);
> > +
> >  void drm_sched_fence_scheduled(struct drm_sched_fence *fence);
> >  void drm_sched_fence_finished(struct drm_sched_fence *fence);
> >

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
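
For reference, the driver-side calling sequence after this split looks
roughly like the sketch below, distilled from the hunks above. The
my_job, my_entity, my_owner and out_fence names are hypothetical
placeholders; only the drm_sched_* and dma_fence_get() calls come from
the patch itself.

	struct dma_fence *out_fence;
	int r;

	/* Phase 1: bare init. The job is not yet visible to the
	 * scheduler and no scheduler fences exist yet, so error
	 * paths here do not leak anything into the scheduler. */
	r = drm_sched_job_init(&my_job->base, my_entity, my_owner);
	if (r)
		return r;

	/* ... allocate memory, look up BOs, gather dependencies;
	 * all of this now happens before the point of no return ... */

	/* Phase 2: arm the job. This runs drm_sched_fence_init() on
	 * job->s_fence, so the finished fence can now be attached to
	 * a dma_resv or handed to userspace. */
	drm_sched_job_arm(&my_job->base);

	out_fence = dma_fence_get(&my_job->base.s_fence->finished);

	/* Point of no return: hand the job to the scheduler. */
	drm_sched_entity_push_job(&my_job->base, my_entity);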