On Thu, 23 Nov 2023 15:24:32 +0300
Dmitry Osipenko <dmitry.osipenko@xxxxxxxxxxxxx> wrote:

> On 11/23/23 12:05, Boris Brezillon wrote:
> > On Thu, 23 Nov 2023 01:04:56 +0300
> > Dmitry Osipenko <dmitry.osipenko@xxxxxxxxxxxxx> wrote:
> >
> >> On 11/10/23 13:53, Boris Brezillon wrote:
> >>> Hm, there was no drm_gem_shmem_get_pages_sgt() call here, so why
> >>> should we add a drm_gem_shmem_get_pages()? What we should do
> >>> instead is add a drm_gem_shmem_get_pages() for each
> >>> drm_gem_shmem_get_pages_sgt() we have in the driver (in
> >>> panfrost_mmu_map()), and add drm_gem_shmem_put_pages() calls
> >>> where they are missing (panfrost_mmu_unmap()).
> >>>
> >>>> +		if (err)
> >>>> +			goto err_free;
> >>>> +	}
> >>>> +
> >>>>  	return bo;
> >>>> +
> >>>> +err_free:
> >>>> +	drm_gem_shmem_free(&bo->base);
> >>>> +
> >>>> +	return ERR_PTR(err);
> >>>>  }
> >>>>  
> >>>>  struct drm_gem_object *
> >>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> >>>> index 770dab1942c2..ac145a98377b 100644
> >>>> --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> >>>> +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> >>>> @@ -504,7 +504,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
> >>>>  		if (IS_ERR(pages[i])) {
> >>>>  			ret = PTR_ERR(pages[i]);
> >>>>  			pages[i] = NULL;
> >>>> -			goto err_pages;
> >>>> +			goto err_unlock;
> >>>>  		}
> >>>>  	}
> >>>>  
> >>>> @@ -512,7 +512,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
> >>>>  	ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
> >>>>  					NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
> >>>>  	if (ret)
> >>>> -		goto err_pages;
> >>>> +		goto err_unlock;
> >>>
> >>> Feels like the panfrost_gem_mapping object should hold a ref on
> >>> the BO pages, not on the BO itself, because, ultimately, the user
> >>> of the BO is the GPU. This matches what I was saying about moving
> >>> get/put_pages() to panfrost_mmu_map/unmap(): every time a
> >>> panfrost_gem_mapping becomes active, you want to take a pages ref,
> >>> and every time it becomes inactive, you should release the pages
> >>> ref.
> >>
> >> The panfrost_mmu_unmap() is also used by the shrinker when a BO is
> >> purged. I'm unhappy with how icky it all becomes if unmap is made
> >> to put pages.
> >
> > Why? That's exactly what's supposed to happen. If you mmu_unmap(),
> > that means you no longer need the pages ref you got.

> The drm_gem_shmem_purge() frees the pages. If mmu_unmap() frees pages
> too, then it becomes odd for drm_gem_shmem_purge() that it needs to
> free pages that were already freed.

Hm, I didn't consider the mmu_unmap() call in the eviction path.

> >> Previously, map() was implicitly allocating pages with get_sgt(),
> >> and then the pages were implicitly released by
> >> drm_gem_shmem_free(). A non-heap BO is mapped when it's created by
> >> Panfrost, hence the actual lifetime of the pages is kept unchanged
> >> by this patch.
> >
> > But the whole point of making it explicit is to control when pages
> > are needed or not, isn't it? The fact that we mmu_map() the BO at
> > open time, and keep it mapped until it's no longer referenced, is
> > an implementation choice, and I don't think having pages_put() in
> > mmu_unmap() changes that.

> Previously, when the last mmu_unmap() was done, the pages were not
> released.
>
> If you make unmap put pages, then you can't map the BO again, because
> the pages are released by the last put() of unmap.
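
[To make the scheme under debate concrete, here is a minimal sketch of
what moving the pages ref into panfrost_mmu_map()/panfrost_mmu_unmap()
would look like. This is illustrative only, not the actual patch: the
bodies are reduced to the refcounting calls, the GPU page-table work is
elided, and the helper signatures are assumed to match the
drm_gem_shmem_get_pages()/put_pages() helpers named above.]

#include <drm/drm_gem_shmem_helper.h>

#include "panfrost_gem.h"
#include "panfrost_mmu.h"

/* Each active mapping holds its own reference on the BO's pages. */
int panfrost_mmu_map(struct panfrost_gem_mapping *mapping)
{
	struct drm_gem_shmem_object *shmem = &mapping->obj->base;
	struct sg_table *sgt;
	int ret;

	/* The pages ref is taken when the mapping becomes active... */
	ret = drm_gem_shmem_get_pages(shmem);
	if (ret)
		return ret;

	sgt = drm_gem_shmem_get_pages_sgt(shmem);
	if (IS_ERR(sgt)) {
		drm_gem_shmem_put_pages(shmem);
		return PTR_ERR(sgt);
	}

	/* ... GPU page-table setup elided ... */
	return 0;
}

void panfrost_mmu_unmap(struct panfrost_gem_mapping *mapping)
{
	/* ... GPU page-table teardown elided ... */

	/*
	 * ... and released when the mapping becomes inactive. This is
	 * the step Dmitry objects to: the last put() frees the pages,
	 * so the BO can't simply be mapped again, and a later
	 * drm_gem_shmem_purge() would find the pages already gone.
	 */
	drm_gem_shmem_put_pages(&mapping->obj->base);
}
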
Well, you could, if panfrost_gem_mapping_get() was not only returning
an existing mapping, but also creating one when none exists. But you're
right, it messes things up with the shmem shrinker, and it also changes
the way we're doing things now.

> In order to keep the old pages-allocation logic unchanged, the pages
> must be referenced while the BO is alive, not while the mapping is
> alive.

Correct.

> Technically, the code can be changed to put pages on unmap. But this
> requires adding a special quirk to drm_gem_shmem_purge(), and then
> for Panfrost the pages would have the same lifetime as the BO anyway,
> so why bother?

No, we certainly don't want to do that.
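
[The approach the thread converges on, then, ties the pages ref to the
BO's lifetime. A sketch loosely following the quoted panfrost_gem.c
hunk; the function context, the PANFROST_BO_HEAP test, and the point at
which get_pages() is called are illustrative, not the actual patch.]

struct panfrost_gem_object *panfrost_gem_create(struct drm_device *dev,
						size_t size, u32 flags)
{
	struct panfrost_gem_object *bo;
	int err;

	/* ... shmem object allocation elided ... */

	if (!(flags & PANFROST_BO_HEAP)) {
		/*
		 * A non-heap BO holds a pages ref from creation until
		 * drm_gem_shmem_free(), so mmu_map()/mmu_unmap() cycles
		 * and the shrinker's purge path never fight over the
		 * last put().
		 */
		err = drm_gem_shmem_get_pages(&bo->base);
		if (err)
			goto err_free;
	}

	return bo;

err_free:
	drm_gem_shmem_free(&bo->base);

	return ERR_PTR(err);
}
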