I still hit a page fault with option 1 while running the oclperf test.
Looks like we need to sync the fence after commit.
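For reference, a minimal sketch of what syncing the fence after commit could mean inside amdgpu_vm_bo_update_mapping(); this is only an illustration of the idea under that assumption, not a tested change:

	r = vm->update_funcs->commit(&params, fence);
	/* Hypothetical illustration only: block until the commit fence
	 * signals, so the update job cannot race with page table BOs
	 * that are freed right after this call. */
	if (!r && fence && *fence)
		dma_fence_wait(*fence, false);

A synchronous wait is heavy-handed, but it shows the ordering that is currently missing between the commit job and the page table BO release.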
From: Tao, Yintian <Yintian.Tao@xxxxxxx>
Sent: Monday, March 16, 2020 4:15:01 PM
To: Pan, Xinhui <Xinhui.Pan@xxxxxxx>; Koenig, Christian <Christian.Koenig@xxxxxxx>
Cc: Deucher, Alexander <Alexander.Deucher@xxxxxxx>; Kuehling, Felix <Felix.Kuehling@xxxxxxx>; Pan, Xinhui <Xinhui.Pan@xxxxxxx>; amd-gfx@xxxxxxxxxxxxxxxxxxxxx <amd-gfx@xxxxxxxxxxxxxxxxxxxxx>
Subject: RE: [PATCH v4 2/2] drm/amdgpu: unref pt bo after job submit
Hi Xinhui
I hit the same problem (page fault) when testing the vk_example benchmark.
Your first option fixes the problem for me. Could you submit a patch for it?
-	if (flags & AMDGPU_PTE_VALID) {
-		struct amdgpu_bo *root = vm->root.base.bo;
-		if (!dma_fence_is_signaled(vm->last_direct))
-			amdgpu_bo_fence(root, vm->last_direct, true);
+	if (!dma_fence_is_signaled(vm->last_direct))
+		amdgpu_bo_fence(root, vm->last_direct, true);
-		if (!dma_fence_is_signaled(vm->last_delayed))
-			amdgpu_bo_fence(root, vm->last_delayed, true);
-	}
+	if (!dma_fence_is_signaled(vm->last_delayed))
+		amdgpu_bo_fence(root, vm->last_delayed, true);
Best Regards
Yintian Tao
-----Original Message-----
From: amd-gfx <amd-gfx-bounces@xxxxxxxxxxxxxxxxxxxxx> On Behalf Of Pan, Xinhui
Sent: March 14, 2020 21:07
To: Koenig, Christian <Christian.Koenig@xxxxxxx>
Cc: Deucher, Alexander <Alexander.Deucher@xxxxxxx>; Kuehling, Felix <Felix.Kuehling@xxxxxxx>; Pan, Xinhui <Xinhui.Pan@xxxxxxx>; amd-gfx@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: [PATCH v4 2/2] drm/amdgpu: unref pt bo after job submit
Hi all,
I think I found the root cause. Here is what happened.
user: alloc/mapping memory
kernel: validate memory and update the bo mapping, and update the page table
-> amdgpu_vm_bo_update_mapping
-> amdgpu_vm_update_ptes
-> amdgpu_vm_alloc_pts
-> amdgpu_vm_clear_bo // this submits a job and gives us a fence, BUT it is NOT added to the resv.
user: free/unmapping memory
kernel: unmap the memory and update the page table
-> amdgpu_vm_bo_update_mapping
sync last_delayed fence if flags & AMDGPU_PTE_VALID // of course we do not sync it here, as this is an unmapping.
-> amdgpu_vm_update_ptes
-> amdgpu_vm_free_pts // unref page table bo.
So from the sequence above, we can see there is a race between bo releasing and bo clearing.
The bo might have been released before the clearing job runs.
We can fix it in several ways:
1) Sync last_delayed in both the mapping and the unmapping case.
Christian, you only sync last_delayed in the mapping case; would it be OK to also sync it in the unmapping case?
2) Always add the fence to the resv after commit (a rough sketch follows this list).
This is done by patchset v4 and only needs patch 1; there is no need to move the bo unref after commit.
3) Move the bo unref after commit, and add the last delayed fence to the resv.
This is done by patchset v1.
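For illustration, option 2 could look roughly like the following right after the commit in amdgpu_vm_bo_update_mapping(); this is only a sketch of the idea, not the actual patch 1 of the v4 series:

	r = vm->update_funcs->commit(&params, fence);
	/* Rough sketch only: publish the commit fence on the root PT
	 * reservation object, so PT BOs that still share this resv carry
	 * the fence when their resv is individualized on release. */
	if (!r && fence && *fence)
		amdgpu_bo_fence(vm->root.base.bo, *fence, true);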
Any ideas?
Thanks,
Xinhui
> On March 14, 2020, at 02:05, Koenig, Christian <Christian.Koenig@xxxxxxx> wrote:
>
> The page table is not updated and then freed. A higher level PDE is updated, and because of this the lower level page tables are freed.
>
> Without this it could be that the memory backing the freed page table is reused while the PDE is still pointing to it.
>
> Rather unlikely that this causes problems, but better safe than sorry.
>
> Regards,
> Christian.
>
> Am 13.03.20 um 18:36 schrieb Felix Kuehling:
>> This seems weird. This means that we update a page table, and then free it in the same amdgpu_vm_update_ptes call? That means the update is redundant. Can we eliminate the redundant PTE update if the page table is about to be freed anyway?
>>
>> Regards,
>> Felix
>>
>> On 2020-03-13 12:09, xinhui pan wrote:
>>> Freeing the page table bo before job submit is insane.
>>> We might touch invalid memory while the job is running.
>>>
>>> We now have an individualized bo resv during bo releasing.
>>> So any fences added to the root PT bo are actually untested when a
>>> normal PT bo is releasing.
>>>
>>> We might hit a gmc page fault, or the memory might just get overwritten.
>>>
>>> Cc: Christian König <christian.koenig@xxxxxxx>
>>> Cc: Alex Deucher <alexander.deucher@xxxxxxx>
>>> Cc: Felix Kuehling <Felix.Kuehling@xxxxxxx>
>>> Signed-off-by: xinhui pan <xinhui.pan@xxxxxxx>
>>> ---
>>> drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 24 +++++++++++++++++++++---
>>> drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 3 +++
>>> 2 files changed, 24 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> index 73398831196f..346e2f753474 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> @@ -937,6 +937,21 @@ static int amdgpu_vm_alloc_pts(struct amdgpu_device *adev,
>>> return r;
>>> }
>>> +static void amdgpu_vm_free_zombie_bo(struct amdgpu_device *adev,
>>> + struct amdgpu_vm *vm)
>>> +{
>>> + struct amdgpu_vm_pt *entry;
>>> +
>>> + while (!list_empty(&vm->zombies)) {
>>> + entry = list_first_entry(&vm->zombies, struct amdgpu_vm_pt,
>>> + base.vm_status);
>>> + list_del(&entry->base.vm_status);
>>> +
>>> + amdgpu_bo_unref(&entry->base.bo->shadow);
>>> + amdgpu_bo_unref(&entry->base.bo);
>>> + }
>>> +}
>>> +
>>> /**
>>> * amdgpu_vm_free_table - free one PD/PT
>>> *
>>> @@ -945,10 +960,9 @@ static int amdgpu_vm_alloc_pts(struct amdgpu_device *adev,
>>> static void amdgpu_vm_free_table(struct amdgpu_vm_pt *entry)
>>> {
>>> if (entry->base.bo) {
>>> + list_move(&entry->base.vm_status,
>>> + &entry->base.bo->vm_bo->vm->zombies);
>>> entry->base.bo->vm_bo = NULL;
>>> - list_del(&entry->base.vm_status);
>>> - amdgpu_bo_unref(&entry->base.bo->shadow);
>>> - amdgpu_bo_unref(&entry->base.bo);
>>> }
>>> kvfree(entry->entries);
>>> entry->entries = NULL;
>>> @@ -1624,6 +1638,7 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
>>> r = vm->update_funcs->commit(&params, fence);
>>> error_unlock:
>>> + amdgpu_vm_free_zombie_bo(adev, vm);
>>> amdgpu_vm_eviction_unlock(vm);
>>> return r;
>>> }
>>> @@ -2807,6 +2822,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>> INIT_LIST_HEAD(&vm->invalidated);
>>> spin_lock_init(&vm->invalidated_lock);
>>> INIT_LIST_HEAD(&vm->freed);
>>> + INIT_LIST_HEAD(&vm->zombies);
>>> /* create scheduler entities for page table updates */
>>> @@ -3119,6 +3135,8 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>>> }
>>> amdgpu_vm_free_pts(adev, vm, NULL);
>>> + amdgpu_vm_free_zombie_bo(adev, vm);
>>> +
>>> amdgpu_bo_unreserve(root);
>>> amdgpu_bo_unref(&root);
>>> WARN_ON(vm->root.base.bo);
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>> index b5705fcfc935..9baf44fa16f0 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>> @@ -269,6 +269,9 @@ struct amdgpu_vm {
>>> /* BO mappings freed, but not yet updated in the PT */
>>> struct list_head freed;
>>> + /* BO will be freed soon */
>>> + struct list_head zombies;
>>> +
>>> /* contains the page directory */
>>> struct amdgpu_vm_pt root;
>>> struct dma_fence *last_update;
>
_______________________________________________
amd-gfx mailing list
amd-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/amd-gfx