On 2021-11-09 6:04 p.m., Philip Yang wrote:
The restore pages worker cannot find the kfd process or the mm struct if the
process is destroyed before the drain retry fault work gets to run. This is
not a failure, so return 0 to avoid dumping GPU vm fault messages into the
kernel log.
I wonder if this could also be solved by draining page fault interrupts
in kfd_process_notifier_release before we remove the process from the
hash table. Because that function runs while holding the mmap lock, we'd
need to detect the draining condition for the process in the page fault
handler before it tries to take the mmap lock. Maybe that's even a
helpful optimization that speeds up interrupt draining by just ignoring
all retry faults during that time.
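
Roughly what I have in mind, as an untested sketch (drain_pagefaults is a
made-up flag on struct svm_range_list, not existing code):

/* In kfd_process_notifier_release(), before the process is removed
 * from the process hash table: mark the process as draining, then
 * wait for the interrupt ring to drain.
 */
WRITE_ONCE(p->svms.drain_pagefaults, true);	/* hypothetical flag */

/* In svm_range_restore_pages(), right after kfd_lookup_process_by_pasid()
 * and before get_task_mm()/mmap_read_lock(): drop retry faults while
 * draining, without ever touching the mmap lock.
 */
if (READ_ONCE(p->svms.drain_pagefaults)) {
	pr_debug("draining, dropping retry fault, pasid 0x%x\n", pasid);
	r = 0;
	goto out;
}
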
That would also allow draining faults in svm_range_unmap_from_cpu
instead of the delayed worker. And I believe that would also elegantly
fix the vma removal race.
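
With that in place, svm_range_unmap_from_cpu (which already runs under the
mmap lock) could drain synchronously, roughly like this (again just a sketch;
svm_range_drain_retry_fault stands in for whatever helper waits for the
interrupt ring, and svms is the process's svm_range_list):

/* Faults are dropped while the flag is set, so waiting for the drain
 * here cannot deadlock on the mmap lock we're already holding.
 */
WRITE_ONCE(svms->drain_pagefaults, true);
svm_range_drain_retry_fault(svms);	/* placeholder drain helper */
WRITE_ONCE(svms->drain_pagefaults, false);
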
Regards,
Felix
Signed-off-by: Philip Yang <Philip.Yang@xxxxxxx>
---
drivers/gpu/drm/amd/amdkfd/kfd_svm.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
index 8f77d5746b2c..2083a10b35c2 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
@@ -2596,7 +2596,7 @@ svm_range_restore_pages(struct amdgpu_device *adev, unsigned int pasid,
 	p = kfd_lookup_process_by_pasid(pasid);
 	if (!p) {
 		pr_debug("kfd process not founded pasid 0x%x\n", pasid);
-		return -ESRCH;
+		return 0;
 	}
 	if (!p->xnack_enabled) {
 		pr_debug("XNACK not enabled for pasid 0x%x\n", pasid);
@@ -2610,7 +2610,7 @@ svm_range_restore_pages(struct amdgpu_device *adev, unsigned int pasid,
 	mm = get_task_mm(p->lead_thread);
 	if (!mm) {
 		pr_debug("svms 0x%p failed to get mm\n", svms);
-		r = -ESRCH;
+		r = 0;
 		goto out;
 	}