Re: [PATCH 06/13] drm/amdgpu: use the new drm_exec object for CS v2

Hi Christian,

> On May 4, 2023, at 20:51, Christian König <ckoenig.leichtzumerken@xxxxxxxxx> wrote:
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index 08eced097bd8..9e751f5d4aa7 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -882,25 +840,13 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
>
> mutex_lock(&p->bo_list->bo_list_mutex);
>
> - /* One for TTM and one for the CS job */
> - amdgpu_bo_list_for_each_entry(e, p->bo_list)
> - e->tv.num_shared = 2;
> -
> - amdgpu_bo_list_get_list(p->bo_list, &p->validated);
> -
> - INIT_LIST_HEAD(&duplicates);
> - amdgpu_vm_get_pd_bo(&fpriv->vm, &p->validated, &p->vm_pd);
> -
> - if (p->uf_entry.tv.bo && !ttm_to_amdgpu_bo(p->uf_entry.tv.bo)->parent)
> - list_add(&p->uf_entry.tv.head, &p->validated);
> -
> /* Get userptr backing pages. If pages are updated after registered
> * in amdgpu_gem_userptr_ioctl(), amdgpu_cs_list_validate() will do
> * amdgpu_ttm_backend_bind() to flush and invalidate new pages
> */
> amdgpu_bo_list_for_each_userptr_entry(e, p->bo_list) {
> - struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
> bool userpage_invalidated = false;
> + struct amdgpu_bo *bo = e->bo;
> int i;
>
> e->user_pages = kvmalloc_array(bo->tbo.ttm->num_pages,
> @@ -1307,20 +1281,22 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
> }
>
> p->fence = dma_fence_get(&leader->base.s_fence->finished);
> - list_for_each_entry(e, &p->validated, tv.head) {
> + drm_exec_for_each_locked_object(&p->exec, index, gobj) {
> +
> + ttm_bo_move_to_lru_tail_unlocked(&gem_to_amdgpu_bo(gobj)->tbo);
>
> /* Everybody except for the gang leader uses READ */
> for (i = 0; i < p->gang_size; ++i) {
> if (p->jobs[i] == leader)
> continue;
>
> - dma_resv_add_fence(e->tv.bo->base.resv,
> + dma_resv_add_fence(gobj->resv,
>   &p->jobs[i]->base.s_fence->finished,
>   DMA_RESV_USAGE_READ);
> }
>
> - /* The gang leader is remembered as writer */
> - e->tv.num_shared = 0;
> + /* The gang leader as remembered as writer */
> + dma_resv_add_fence(gobj->resv, p->fence, DMA_RESV_USAGE_WRITE);
> }
>
> seq = amdgpu_ctx_add_fence(p->ctx, p->entities[p->gang_leader_idx],

I believe this changes the dma_resv usage of the VM PDs from READ to WRITE.
Maybe we could check whether a BO is a PD/PT and supply DMA_RESV_USAGE_READ in that case?
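
Something like the following is what I have in mind (an untested sketch;
it assumes fpriv is still reachable at this point in amdgpu_cs_submit()
and relies on per-VM BOs, including the PDs/PTs, sharing the root PD's
reservation object):

	struct amdgpu_bo *bo = gem_to_amdgpu_bo(gobj);
	enum dma_resv_usage usage;

	/* PD/PT BOs share the root PD's dma_resv, so comparing the
	 * reservation objects identifies them.
	 */
	usage = bo->tbo.base.resv == fpriv->vm.root.bo->tbo.base.resv ?
		DMA_RESV_USAGE_READ : DMA_RESV_USAGE_WRITE;

	/* The gang leader is remembered as writer, except for PDs/PTs */
	dma_resv_add_fence(gobj->resv, p->fence, usage);

That said, I'm not sure whether the resv comparison is the check you
would prefer here.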

Tatsuyuki



