On 05/01/2024 18:46, Dmitry Osipenko wrote:
> From: Boris Brezillon <boris.brezillon@xxxxxxxxxxxxx>
> 
> If some of the pages or the sgt allocation failed, we shouldn't release
> the pages ref we got earlier, otherwise we will end up with unbalanced
> get/put_pages() calls. We should instead leave everything in place and
> let the BO release function deal with extra cleanup when the object is
> destroyed, or let the fault handler try again next time it's called.
> 
> Fixes: 187d2929206e ("drm/panfrost: Add support for GPU heap allocations")
> Cc: <stable@xxxxxxxxxxxxxxx>
> Signed-off-by: Boris Brezillon <boris.brezillon@xxxxxxxxxxxxx>
> Co-developed-by: Dmitry Osipenko <dmitry.osipenko@xxxxxxxxxxxxx>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@xxxxxxxxxxxxx>

Reviewed-by: Steven Price <steven.price@xxxxxxx>

> ---
>  drivers/gpu/drm/panfrost/panfrost_mmu.c | 13 +++++++++----
>  1 file changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> index bd5a0073009d..4a0b4bf03f1a 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> @@ -502,11 +502,18 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>  	mapping_set_unevictable(mapping);
>  
>  	for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
> +		/* Can happen if the last fault only partially filled this
> +		 * section of the pages array before failing. In that case
> +		 * we skip already filled pages.
> +		 */
> +		if (pages[i])
> +			continue;
> +
>  		pages[i] = shmem_read_mapping_page(mapping, i);
>  		if (IS_ERR(pages[i])) {
>  			ret = PTR_ERR(pages[i]);
>  			pages[i] = NULL;
> -			goto err_pages;
> +			goto err_unlock;
>  		}
>  	}
>  
> @@ -514,7 +521,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>  	ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
>  					NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
>  	if (ret)
> -		goto err_pages;
> +		goto err_unlock;
>  
>  	ret = dma_map_sgtable(pfdev->dev, sgt, DMA_BIDIRECTIONAL, 0);
>  	if (ret)
> @@ -537,8 +544,6 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>  
>  err_map:
>  	sg_free_table(sgt);
> -err_pages:
> -	drm_gem_shmem_put_pages_locked(&bo->base);
> err_unlock:
>  	dma_resv_unlock(obj->resv);
> err_bo:
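
For anyone skimming the change, here is a minimal, hypothetical user-space sketch (plain C, not Panfrost code; acquire_page() and fill_pages() are made-up stand-ins for shmem_read_mapping_page() and the fault-fill loop) of the retry-safe pattern that the new "if (pages[i]) continue;" check enables: a failed attempt leaves already-filled slots in place, so the next fault only fills the holes and never takes a second reference on the same slot.

/*
 * Hypothetical illustration only (not kernel code): a fill loop that
 * skips already-populated slots, so a partial failure can simply be
 * retried later without double-acquiring anything.
 */
#include <stdio.h>
#include <stdlib.h>

#define NUM_SLOTS 8

/* Stand-in for the page lookup; may fail part-way through. */
static void *acquire_page(int i, int fail_from)
{
	if (i >= fail_from)
		return NULL;		/* simulate -ENOMEM */
	return malloc(64);
}

/* Fill only the empty slots; on failure, keep what we already have. */
static int fill_pages(void *pages[], int fail_from)
{
	for (int i = 0; i < NUM_SLOTS; i++) {
		if (pages[i])		/* filled by a previous attempt */
			continue;
		pages[i] = acquire_page(i, fail_from);
		if (!pages[i])
			return -1;	/* leave earlier slots untouched */
	}
	return 0;
}

int main(void)
{
	void *pages[NUM_SLOTS] = { NULL };

	/* First attempt fails part-way through ... */
	if (fill_pages(pages, 5))
		printf("first attempt failed, keeping partial fill\n");

	/* ... the retry only has to fill the remaining holes. */
	if (!fill_pages(pages, NUM_SLOTS))
		printf("retry completed the fill\n");

	for (int i = 0; i < NUM_SLOTS; i++)
		free(pages[i]);
	return 0;
}

The same reasoning is why the patch drops the put_pages() call from the error path: the entries that were filled stay owned by the BO until it is destroyed, so both the eventual cleanup and a later retry see a consistent pages array.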