On Mon, Dec 19, 2016 at 12:43:45PM +0000, Chris Wilson wrote:
> If we at first do not succeed with attempting to remap our physical
> pages using a coalesced scattergather list, try again with one
> scattergather entry per page. This should help with swiotlb as it uses a
> limited buffer size and only searches for contiguous chunks within its
> buffer aligned up to the next boundary - i.e. we may prematurely cause a
> failure as we are unable to utilize the unused space between large
> chunks and trigger an error such as:
>
>   i915 0000:00:02.0: swiotlb buffer is full (sz: 1630208 bytes)
>
> Reported-by: Juergen Gross <jgross@xxxxxxxx>
> Fixes: 871dfbd67d4e ("drm/i915: Allow compaction upto SWIOTLB max segment size")
> Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
> Cc: Imre Deak <imre.deak@xxxxxxxxx>
> Cc: <drm-intel-fixes@xxxxxxxxxxxxxxxxxxxxx>

Reviewed-by: Daniel Vetter <daniel.vetter@xxxxxxxx>

Feels a bit funny to call swiotlb_* functions; I'd kinda assume that we
could somehow figure this out from the dma limits instead of leaking
through the dma api abstraction. But that's already there, so meh.
-Daniel

> ---
>  drivers/gpu/drm/i915/i915_gem.c | 26 ++++++++++++++++++++++----
>  1 file changed, 22 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 412f3513f269..4e263df2afc3 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -2326,7 +2326,8 @@ static struct sg_table *
>  i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
>  {
>  	struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
> -	int page_count, i;
> +	const unsigned long page_count = obj->base.size / PAGE_SIZE;
> +	unsigned long i;
>  	struct address_space *mapping;
>  	struct sg_table *st;
>  	struct scatterlist *sg;
> @@ -2352,7 +2353,7 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
>  	if (st == NULL)
>  		return ERR_PTR(-ENOMEM);
>
> -	page_count = obj->base.size / PAGE_SIZE;
> +rebuild_st:
>  	if (sg_alloc_table(st, page_count, GFP_KERNEL)) {
>  		kfree(st);
>  		return ERR_PTR(-ENOMEM);
> @@ -2411,8 +2412,25 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
>  	i915_sg_trim(st);
>
>  	ret = i915_gem_gtt_prepare_pages(obj, st);
> -	if (ret)
> -		goto err_pages;
> +	if (ret) {
> +		/* DMA remapping failed? One possible cause is that
> +		 * it could not reserve enough large entries, asking
> +		 * for PAGE_SIZE chunks instead may be helpful.
> +		 */
> +		if (max_segment > PAGE_SIZE) {
> +			for_each_sgt_page(page, sgt_iter, st)
> +				put_page(page);
> +			sg_free_table(st);
> +
> +			max_segment = PAGE_SIZE;
> +			goto rebuild_st;
> +		} else {
> +			dev_warn(&dev_priv->drm.pdev->dev,
> +				 "Failed to DMA remap %lu pages\n",
> +				 page_count);
> +			goto err_pages;
> +		}
> +	}
>
>  	if (i915_gem_object_needs_bit17_swizzle(obj))
>  		i915_gem_object_do_bit_17_swizzle(obj, st);
> --
> 2.11.0
>
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
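
For reference, a minimal sketch of the dma-limits direction Daniel alludes to,
i.e. deriving the scatterlist segment cap from the device's DMA parameters
(dma_get_max_seg_size()) rather than calling swiotlb_max_segment() directly.
This is only an illustration under that assumption; the helper name
i915_max_segment() is invented here and is not part of the patch above.

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/kernel.h>

/*
 * Hypothetical helper, not from the patch: ask the DMA layer for the
 * per-segment limit it is willing to map for this device, instead of
 * naming swiotlb explicitly in the driver.
 */
static unsigned int i915_max_segment(struct device *dev)
{
	/*
	 * dma_get_max_seg_size() returns the limit set via
	 * dma_set_max_seg_size() for the device, or a generic
	 * default when no limit has been configured.
	 */
	unsigned int max = dma_get_max_seg_size(dev);

	/* Object pages are PAGE_SIZE granular, so keep the cap page aligned. */
	max = rounddown(max, PAGE_SIZE);

	return max ? max : PAGE_SIZE;
}

The fallback loop in the patch would then start from this value instead of
swiotlb_max_segment(), and still drop to PAGE_SIZE segments on a remap failure.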