On 2021-05-05 6:59 p.m., Alex Sierra wrote:
> This reference helps hmm decide whether device pages in the range
> need to be migrated back to system memory, depending on whether or
> not they are in the same memory domain.
> In this case, the reference could come from the same memory domain
> with devices connected to the same hive.
>
> Signed-off-by: Alex Sierra <alex.sierra@xxxxxxx>

I'd split this into two patches: one that adds the owner parameter to
the amdgpu_hmm_range_get_pages function, and one that adds the new
logic in svm_range_validate_and_map. See one more comment inline.

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c  |  3 ++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h  |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c |  2 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_svm.c    | 30 +++++++++++++++++++++++--
>  4 files changed, 32 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> index 2741c28ff1b5..378c238c2099 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
> @@ -160,7 +160,7 @@ int amdgpu_hmm_range_get_pages(struct mmu_interval_notifier *notifier,
>  			       struct mm_struct *mm, struct page **pages,
>  			       uint64_t start, uint64_t npages,
>  			       struct hmm_range **phmm_range, bool readonly,
> -			       bool mmap_locked)
> +			       bool mmap_locked, void *owner)
>  {
>  	struct hmm_range *hmm_range;
>  	unsigned long timeout;
> @@ -185,6 +185,7 @@ int amdgpu_hmm_range_get_pages(struct mmu_interval_notifier *notifier,
>  	hmm_range->hmm_pfns = pfns;
>  	hmm_range->start = start;
>  	hmm_range->end = start + npages * PAGE_SIZE;
> +	hmm_range->dev_private_owner = owner;
>
>  	/* Assuming 512MB takes maxmium 1 second to fault page address */
>  	timeout = max(npages >> 17, 1ULL) * HMM_RANGE_DEFAULT_TIMEOUT;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h
> index 7f7d37a457c3..14a3c1864085 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.h
> @@ -34,7 +34,7 @@ int amdgpu_hmm_range_get_pages(struct mmu_interval_notifier *notifier,
>  			       struct mm_struct *mm, struct page **pages,
>  			       uint64_t start, uint64_t npages,
>  			       struct hmm_range **phmm_range, bool readonly,
> -			       bool mmap_locked);
> +			       bool mmap_locked, void *owner);
>  int amdgpu_hmm_range_get_pages_done(struct hmm_range *hmm_range);
>
>  #if defined(CONFIG_HMM_MIRROR)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index 7e7d8330d64b..c13f7fbfc070 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -709,7 +709,7 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
>  	readonly = amdgpu_ttm_tt_is_readonly(ttm);
>  	r = amdgpu_hmm_range_get_pages(&bo->notifier, mm, pages, start,
>  				       ttm->num_pages, &gtt->range, readonly,
> -				       false);
> +				       false, NULL);
>  out_putmm:
>  	mmput(mm);
>
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> index d9111fea724b..7104762a0119 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> @@ -1304,6 +1304,16 @@ static void svm_range_unreserve_bos(struct svm_validate_context *ctx)
>  	ttm_eu_backoff_reservation(&ctx->ticket, &ctx->validate_list);
>  }
>
> +static void *kfd_svm_page_owner(struct kfd_process *p, int32_t gpuidx)
> +{
> +	struct kfd_process_device *pdd;
> +	struct amdgpu_device *adev;
> +
> +	pdd = kfd_process_device_from_gpuidx(p, gpuidx);
> +	adev = (struct amdgpu_device *)pdd->dev->kgd;
> +
> +	return (adev->hive) ? (void *)adev->hive : (void *)adev;
> +}
>  /*
>   * Validation+GPU mapping with concurrent invalidation (MMU notifiers)
>   *
> @@ -1334,7 +1344,11 @@ static int svm_range_validate_and_map(struct mm_struct *mm,
>  {
>  	struct svm_validate_context ctx;
>  	struct hmm_range *hmm_range;
> +	void *owner;
> +	struct kfd_process *p;
>  	int r = 0;
> +	int i = 0;
> +	int32_t idx;
>
>  	ctx.process = container_of(prange->svms, struct kfd_process, svms);
>  	ctx.prange = prange;
> @@ -1380,10 +1394,22 @@ static int svm_range_validate_and_map(struct mm_struct *mm,
>  	svm_range_reserve_bos(&ctx);
>
>  	if (!prange->actual_loc) {
> +
> +		p = container_of(prange->svms, struct kfd_process, svms);
> +		owner = kfd_svm_page_owner(p, find_first_bit(ctx.bitmap,
> +							MAX_GPU_INSTANCE));
> +		for_each_set_bit(idx, ctx.bitmap, MAX_GPU_INSTANCE) {
> +
> +			if (kfd_svm_page_owner(p, idx) != owner) {
> +				owner = NULL;
> +				break;
> +			}
> +		}
> +
>  		r = amdgpu_hmm_range_get_pages(&prange->notifier, mm, NULL,
>  					       prange->start << PAGE_SHIFT,
>  					       prange->npages, &hmm_range,
> -					       false, true);
> +					       false, true, owner);
>  		if (r) {
>  			pr_debug("failed %d to get svm range pages\n", r);
>  			goto unreserve_out;
> @@ -2650,7 +2676,7 @@ void svm_range_prefault(struct svm_range *prange, struct mm_struct *mm)
>  	r = amdgpu_hmm_range_get_pages(&prange->notifier, mm, NULL,
>  				       prange->start << PAGE_SHIFT,
>  				       prange->npages, &hmm_range,
> -				       false, true);
> +				       false, true, NULL);

This pre-fault hack is used before migrations to VRAM to make sure all
the pages are resident before the migration. With partial migrations,
this may migrate pages from VRAM back to system memory before you
migrate to VRAM again, which would lead to a lot of page bouncing.

I think this svm_range_prefault function also needs an owner parameter
(or a gpuidx parameter from which to derive the owner) so that it
doesn't migrate VRAM back to system memory unnecessarily.

Eventually we should be able to get rid of this hack, hopefully. But
for now, at least let's not make it terrible. ;)

Regards,
  Felix

>  	if (!r) {
>  		amdgpu_hmm_range_get_pages_done(hmm_range);
>  		prange->validated_once = true;
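
For reference, a minimal sketch of the svm_range_prefault change being
suggested above (the owner-parameter variant). Only the
amdgpu_hmm_range_get_pages call and the completion path are taken from
the quoted hunk; the new owner parameter and the caller-side lookup are
assumptions about how a follow-up patch could look, not code from this
series:

/*
 * Hypothetical sketch: let the caller pass the dev_private_owner for
 * the pre-fault, so pages already resident in VRAM of the same memory
 * domain are not migrated back to system memory only to be migrated
 * to VRAM again.
 */
void svm_range_prefault(struct svm_range *prange, struct mm_struct *mm,
			void *owner)
{
	struct hmm_range *hmm_range;
	int r;

	/* Fault in the range; pages whose dev_private_owner matches
	 * "owner" are left in device memory instead of being migrated.
	 */
	r = amdgpu_hmm_range_get_pages(&prange->notifier, mm, NULL,
				       prange->start << PAGE_SHIFT,
				       prange->npages, &hmm_range,
				       false, true, owner);
	if (!r) {
		amdgpu_hmm_range_get_pages_done(hmm_range);
		prange->validated_once = true;
	}
}

A caller about to migrate the range to a specific GPU's VRAM could then
invoke svm_range_prefault(prange, mm, kfd_svm_page_owner(p, gpuidx)),
reusing the kfd_svm_page_owner helper introduced in this patch.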