Calling amdgpu_vm_bo_rmv() alone wouldn't hit such a bug, but in another
branch for SR-IOV we not only call vm_bo_rmv(), we also set csa_addr to
NULL after it, so the NULL address is inserted into the RB, and when
preemption occurs the CP backs up its snapshot to the NULL address.
Although in staging-4.9 we don't set csa_addr to NULL (because, as you
suggested, we always use a hardcoded/macro value for the CSA address),
logically we'd better put the CSA unmapping behind "sched_entity_fini",
which is more reasonable ...

BR
Monk

________________________________
From: amd-gfx <amd-gfx-bounces at lists.freedesktop.org> on behalf of Christian König <deathsimple at vodafone.de>
Sent: January 13, 2017 17:25:09
To: Liu, Monk; amd-gfx at lists.freedesktop.org
Subject: Re: [PATCH] drm/amdgpu:put CSA unmap after sched_entity_fini

On 13.01.2017 at 05:11, Monk Liu wrote:
> otherwise the CSA may be unmapped before the gpu_scheduler finishes
> scheduling jobs, triggering a VM fault on the CSA address
>
> Change-Id: Ib2e25ededf89bca44c764477dd2f9127024ca78c
> Signed-off-by: Monk Liu <Monk.Liu at amd.com>

Did you really run into an issue because of that?

Calling amdgpu_vm_bo_rmv() shouldn't affect the page tables or already
submitted command submissions in any way.

Regards,
Christian.

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 8 --------
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c  | 8 ++++++++
>  2 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> index 45484c0..e13cdde 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
> @@ -694,14 +694,6 @@ void amdgpu_driver_postclose_kms(struct drm_device *dev,
>  	amdgpu_uvd_free_handles(adev, file_priv);
>  	amdgpu_vce_free_handles(adev, file_priv);
>
> -	if (amdgpu_sriov_vf(adev)) {
> -		/* TODO: how to handle reserve failure */
> -		BUG_ON(amdgpu_bo_reserve(adev->virt.csa_obj, false));
> -		amdgpu_vm_bo_rmv(adev, fpriv->vm.csa_bo_va);
> -		fpriv->vm.csa_bo_va = NULL;
> -		amdgpu_bo_unreserve(adev->virt.csa_obj);
> -	}
> -
>  	amdgpu_vm_fini(adev, &fpriv->vm);
>
>  	idr_for_each_entry(&fpriv->bo_list_handles, list, handle)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index d05546e..94098bc 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -1608,6 +1608,14 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>
>  	amd_sched_entity_fini(vm->entity.sched, &vm->entity);
>
> +	if (amdgpu_sriov_vf(adev)) {
> +		/* TODO: how to handle reserve failure */
> +		BUG_ON(amdgpu_bo_reserve(adev->virt.csa_obj, false));
> +		amdgpu_vm_bo_rmv(adev, vm->csa_bo_va);
> +		vm->csa_bo_va = NULL;
> +		amdgpu_bo_unreserve(adev->virt.csa_obj);
> +	}
> +
>  	if (!RB_EMPTY_ROOT(&vm->va)) {
>  		dev_err(adev->dev, "still active bo inside vm\n");
>  	}
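
For clarity, a condensed sketch of amdgpu_vm_fini() as it looks after
this patch, showing only the ordering the thread is about; unrelated
teardown steps are elided and this is not the verbatim driver code:

	void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
	{
		/* Finish the VM's scheduler entity first: once this returns,
		 * no job from this VM is still pending in the GPU scheduler.
		 */
		amd_sched_entity_fini(vm->entity.sched, &vm->entity);

		/* Only now is it safe to remove the CSA mapping: preemption
		 * can no longer make the CP back up its snapshot to a stale
		 * or NULL CSA address.
		 */
		if (amdgpu_sriov_vf(adev)) {
			/* TODO: how to handle reserve failure */
			BUG_ON(amdgpu_bo_reserve(adev->virt.csa_obj, false));
			amdgpu_vm_bo_rmv(adev, vm->csa_bo_va);
			vm->csa_bo_va = NULL;
			amdgpu_bo_unreserve(adev->virt.csa_obj);
		}

		/* ... remaining page-table and BO teardown ... */
	}

The design point is that amd_sched_entity_fini() acts as the drain
barrier: moving the CSA unmap behind it guarantees no still-queued job
can reference the mapping when it goes away.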