This fixes a WARN in i915_gem_free_object when obj->pages_pin_count isn't 0.

v2: Add locking to unmap, noticed by Chris Wilson. Note that even though
we call unmap with our own dev->struct_mutex held, that won't result in
an immediate deadlock since we never go through the dma_buf interfaces
for our own, reimported buffers. It's still easy to blow up and anger
lockdep, but that's already the case with our ->map implementation.

Fixing this for real will involve per dma-buf ww mutex locking by the
callers. And lots of fun. So go with the duct-tape approach for now.

Cc: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Reported-by: Maarten Lankhorst <maarten.lankhorst@xxxxxxxxxxxxx>
Cc: Maarten Lankhorst <maarten.lankhorst@xxxxxxxxxxxxx>
Tested-by: Armin K. <krejzi@xxxxxxxxx> (v1)
Signed-off-by: Daniel Vetter <daniel.vetter@xxxxxxxx>
---
 drivers/gpu/drm/i915/i915_gem_dmabuf.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
index 63ee1a9..f7e1682 100644
--- a/drivers/gpu/drm/i915/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
@@ -85,9 +85,17 @@ static void i915_gem_unmap_dma_buf(struct dma_buf_attachment *attachment,
 				   struct sg_table *sg,
 				   enum dma_data_direction dir)
 {
+	struct drm_i915_gem_object *obj = attachment->dmabuf->priv;
+
+	mutex_lock(&obj->base.dev->struct_mutex);
+
 	dma_unmap_sg(attachment->dev, sg->sgl, sg->nents, dir);
 	sg_free_table(sg);
 	kfree(sg);
+
+	i915_gem_object_unpin_pages(obj);
+
+	mutex_unlock(&obj->base.dev->struct_mutex);
 }
 
 static void *i915_gem_dmabuf_vmap(struct dma_buf *dma_buf)
-- 
1.8.3.2
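
For context, a rough sketch of the map-side counterpart follows; it is not the
exact upstream i915_gem_map_dma_buf (the name i915_gem_map_dma_buf_sketch is
illustrative, error handling is trimmed, and the interruptible locking is
simplified to a plain mutex_lock). The point is to show where the pages get
pinned under struct_mutex: the unmap hook in the patch above now drops exactly
that pin, which is why the pages_pin_count WARN in i915_gem_free_object goes
away once both sides are balanced.

/*
 * Sketch only -- simplified relative to the real map implementation.
 */
static struct sg_table *
i915_gem_map_dma_buf_sketch(struct dma_buf_attachment *attachment,
			    enum dma_data_direction dir)
{
	struct drm_i915_gem_object *obj = attachment->dmabuf->priv;
	struct sg_table *st;
	struct scatterlist *src, *dst;
	int ret, i;

	mutex_lock(&obj->base.dev->struct_mutex);

	/* Make sure the backing pages exist before handing them out. */
	ret = i915_gem_object_get_pages(obj);
	if (ret) {
		st = ERR_PTR(ret);
		goto out;
	}

	/* Give the importer an independent sg_table so it can be unmapped
	 * and freed without touching obj->pages. */
	st = kmalloc(sizeof(*st), GFP_KERNEL);
	if (st == NULL || sg_alloc_table(st, obj->pages->nents, GFP_KERNEL)) {
		kfree(st);
		st = ERR_PTR(-ENOMEM);
		goto out;
	}

	src = obj->pages->sgl;
	dst = st->sgl;
	for (i = 0; i < obj->pages->nents; i++) {
		sg_set_page(dst, sg_page(src), src->length, 0);
		dst = sg_next(dst);
		src = sg_next(src);
	}

	if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir)) {
		sg_free_table(st);
		kfree(st);
		st = ERR_PTR(-ENOMEM);
		goto out;
	}

	/* This is the pin that the unmap hook above now releases. */
	i915_gem_object_pin_pages(obj);

out:
	mutex_unlock(&obj->base.dev->struct_mutex);
	return st;
}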