On 02/02/2017 13:27, Chris Wilson wrote:
If we fail to dma-map the object, the most common cause is lack of space inside the SW-IOTLB due to fragmentation. If we recreate the sg_table using segments of PAGE_SIZE (and single page allocations), we may succeed in remapping the scatterlist. This first became a significant problem for the mock selftests after commit 5584f1b1d73e ("drm/i915: fix i915 running as dom0 under Xen") increased the max_order.
There is still "max_order = min(max_order, ...)" in that patch, so I don't see how it increased it.
I think you mean "drm/i915: Allow compaction upto SWIOTLB max segment size"? Or no, that predates the internal object support. So just a Fixes: tag on the patch which introduced the internal objects?
Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxxxxxxxx>
---
 drivers/gpu/drm/i915/i915_gem_internal.c | 37 ++++++++++++++++++++------------
 1 file changed, 23 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_internal.c b/drivers/gpu/drm/i915/i915_gem_internal.c
index 2b9d5e94a8ae..fc950abbe400 100644
--- a/drivers/gpu/drm/i915/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/i915_gem_internal.c
@@ -48,24 +48,12 @@ static struct sg_table *
 i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 {
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
-	unsigned int npages = obj->base.size / PAGE_SIZE;
 	struct sg_table *st;
 	struct scatterlist *sg;
+	unsigned int npages;
 	int max_order;
 	gfp_t gfp;
 
-	st = kmalloc(sizeof(*st), GFP_KERNEL);
-	if (!st)
-		return ERR_PTR(-ENOMEM);
-
-	if (sg_alloc_table(st, npages, GFP_KERNEL)) {
-		kfree(st);
-		return ERR_PTR(-ENOMEM);
-	}
-
-	sg = st->sgl;
-	st->nents = 0;
-
 	max_order = MAX_ORDER;
 #ifdef CONFIG_SWIOTLB
 	if (swiotlb_nr_tbl()) {
@@ -87,6 +75,20 @@ i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 		gfp |= __GFP_DMA32;
 	}
 
+create_st:
+	st = kmalloc(sizeof(*st), GFP_KERNEL);
+	if (!st)
+		return ERR_PTR(-ENOMEM);
+
+	npages = obj->base.size / PAGE_SIZE;
+	if (sg_alloc_table(st, npages, GFP_KERNEL)) {
+		kfree(st);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	sg = st->sgl;
+	st->nents = 0;
+
 	do {
 		int order = min(fls(npages) - 1, max_order);
 		struct page *page;
@@ -114,8 +116,15 @@ i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 		sg = __sg_next(sg);
 	} while (1);
 
-	if (i915_gem_gtt_prepare_pages(obj, st))
+	if (i915_gem_gtt_prepare_pages(obj, st)) {
+		/* Failed to dma-map try again with single page sg segments */
+		if (get_order(st->sgl->length)) {
+			internal_free_pages(st);
+			max_order = 0;
+			goto create_st;
+		}
 		goto err;
+	}
 
 	/* Mark the pages as dontneed whilst they are still pinned. As soon
 	 * as they are unpinned they are allowed to be reaped by the shrinker,
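For illustration, here is a minimal, self-contained userspace sketch of the retry pattern the patch implements: greedily build the segment list out of the largest chunks allowed, and if the (simulated) dma-map fails, throw the list away and rebuild it once with single-page (order-0) segments. The seg list, build_segments() and map_segments() are hypothetical stand-ins for the sg_table, the allocation loop and i915_gem_gtt_prepare_pages(); this is not the kernel code itself.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct seg {
	size_t pages;			/* segment length in pages */
	struct seg *next;
};

/* Stand-in for i915_gem_gtt_prepare_pages(): pretend the IOTLB is so
 * fragmented that nothing larger than a single page can be mapped. */
static int map_segments(const struct seg *s)
{
	for (; s; s = s->next)
		if (s->pages > 1)
			return -ENOMEM;
	return 0;
}

static void free_segments(struct seg *s)
{
	while (s) {
		struct seg *next = s->next;

		free(s);
		s = next;
	}
}

/* Cover npages with chunks of at most (1 << max_order) pages, a
 * simplification of the min(fls(npages) - 1, max_order) loop above. */
static struct seg *build_segments(size_t npages, int max_order)
{
	struct seg *head = NULL, **tail = &head;

	while (npages) {
		size_t chunk = (size_t)1 << max_order;
		struct seg *s = malloc(sizeof(*s));

		if (!s) {
			free_segments(head);
			return NULL;
		}
		if (chunk > npages)
			chunk = npages;
		s->pages = chunk;
		s->next = NULL;
		*tail = s;
		tail = &s->next;
		npages -= chunk;
	}
	return head;
}

int main(void)
{
	size_t npages = 16;
	int max_order = 4;	/* first try chunks of up to 16 pages */
	struct seg *st;

create_st:
	st = build_segments(npages, max_order);
	if (!st)
		return 1;

	if (map_segments(st)) {
		/* Failed to "dma-map": retry once with single-page segments,
		 * like the get_order(st->sgl->length) check in the patch. */
		if (max_order) {
			free_segments(st);
			max_order = 0;
			goto create_st;
		}
		free_segments(st);
		return 1;
	}

	printf("mapped %zu pages with max_order=%d\n", npages, max_order);
	free_segments(st);
	return 0;
}

The single retry mirrors the get_order(st->sgl->length) check: once max_order is zero the first segment is already a single page, so a second mapping failure is final.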
Looks correct; just the right culprit needs to be identified in the commit message and Fixes: tag. With that fixed:
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>

Regards,

Tvrtko