Adding the BO and its mapping is VM-specific code, we should initialize that directly in amdgpu_vm_init() and not call any helper to delegate the work. Especially don't call from the VM code into the virt code and back into the VM code.

If you really want to keep that in amdgpu_virt, an alternative would be to put the call into amdgpu_driver_open_kms() directly after calling amdgpu_vm_init() (a rough sketch of that placement follows at the end of this mail).

Regards,
Christian.

On 10.01.2017 at 03:40, Liu, Monk wrote:
>
> Why do you want to drop the "adev->virt.map_csa()" call? Without that
> call you don't have the CSA's bo_va and mappings initialized, and
> without the bo_va and mappings how can the "amdgpu_vm_bo_update()"
> invocation work?
>
>
> Monk
>
> ------------------------------------------------------------------------
> *From:* Christian König <deathsimple at vodafone.de>
> *Sent:* January 9, 2017 19:04:44
> *To:* Liu, Monk; amd-gfx at lists.freedesktop.org
> *Subject:* Re: [PATCH 7/7] drm/amdgpu:map/unmap static csa accordingly
> On 09.01.2017 at 09:03, Monk Liu wrote:
> > and update CSA bo_va in each submit
> >
> > Change-Id: I5ed73e1b7f89743d90298bc814a42a91e166be3b
> > Signed-off-by: Monk Liu <Monk.Liu at amd.com>
> > ---
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 14 ++++++++++++++
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 11 +++++++++++
> >   2 files changed, 25 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > index 6159afc..083ab73 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > @@ -771,6 +771,20 @@ static int amdgpu_bo_vm_update_pte(struct amdgpu_cs_parser *p,
> >       if (r)
> >           return r;
> >
> > +     if (amdgpu_sriov_vf(adev)) {
> > +             struct fence *f;
>
> A new line is needed between declaration and code.
>
> > +             bo_va = vm->vm_virt.csa_bo_va;
> > +             BUG_ON(!bo_va);
> > +             r = amdgpu_vm_bo_update(adev, bo_va, false);
> > +             if (r)
> > +                     return r;
> > +
> > +             f = bo_va->last_pt_update;
> > +             r = amdgpu_sync_fence(adev, &p->job->sync, f);
> > +             if (r)
> > +                     return r;
> > +     }
> > +
> >       if (p->bo_list) {
> >           for (i = 0; i < p->bo_list->num_entries; i++) {
> >               struct fence *f;
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > index d05546e..b9cd686 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > @@ -1576,6 +1576,14 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm)
> >       vm->last_eviction_counter = atomic64_read(&adev->num_evictions);
> >       amdgpu_bo_unreserve(vm->page_directory);
> >
> > +     if (amdgpu_sriov_vf(adev)) {
> > +             BUG_ON(!adev->virt.map_csa);
> > +             BUG_ON(!adev->virt.unmap_csa);
> > +             r = adev->virt.map_csa(adev, vm);
> > +             if (r)
> > +                     goto error_free_page_directory;
> > +     }
> > +
>
> Just completely drop that. Updating the VM on the first command
> submission should be sufficient.
>
> Christian.
>
> >       return 0;
> >
> >   error_free_page_directory:
> > @@ -1606,6 +1614,9 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
> >       struct amdgpu_bo_va_mapping *mapping, *tmp;
> >       int i;
> >
> > +     if (amdgpu_sriov_vf(adev))
> > +             adev->virt.unmap_csa(adev, vm);
> > +
> >       amd_sched_entity_fini(vm->entity.sched, &vm->entity);
> >
> >       if (!RB_EMPTY_ROOT(&vm->va)) {
> >
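
For reference, here is a minimal sketch of the alternative placement mentioned at the top: calling the CSA map from amdgpu_driver_open_kms() right after amdgpu_vm_init() succeeds, so amdgpu_vm_init() stays free of virt-specific calls. This is only an illustration, not a patch; the body of amdgpu_driver_open_kms(), the fpriv allocation and the error handling are assumptions, only amdgpu_vm_init(), amdgpu_vm_fini(), amdgpu_sriov_vf() and adev->virt.map_csa() are taken from the discussion above.

/* Sketch only: surrounding structure of amdgpu_driver_open_kms() is assumed,
 * not copied from the driver. */
int amdgpu_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv)
{
	struct amdgpu_device *adev = dev->dev_private;
	struct amdgpu_fpriv *fpriv;
	int r;

	fpriv = kzalloc(sizeof(*fpriv), GFP_KERNEL);
	if (!fpriv)
		return -ENOMEM;

	r = amdgpu_vm_init(adev, &fpriv->vm);
	if (r)
		goto error_free;

	/* SR-IOV VF only: map the static CSA into the new VM here, so the
	 * VM code never has to call into the virt code itself. */
	if (amdgpu_sriov_vf(adev)) {
		r = adev->virt.map_csa(adev, &fpriv->vm);
		if (r) {
			amdgpu_vm_fini(adev, &fpriv->vm);
			goto error_free;
		}
	}

	file_priv->driver_priv = fpriv;
	return 0;

error_free:
	kfree(fpriv);
	return r;
}

With that placement the bo_va/mapping still exists before the first command submission, so the amdgpu_vm_bo_update() call in amdgpu_bo_vm_update_pte() from the patch above keeps working unchanged.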