Re: [PATCH] drm/i915: Do not unmap object unless no other VMAs reference it


On Thu, Jul 17, 2014 at 04:45:02PM -0700, Ben Widawsky wrote:
> On Fri, Jul 11, 2014 at 10:20:07AM -0700, armin.c.reese@xxxxxxxxx wrote:
> > From: Armin Reese <armin.c.reese@xxxxxxxxx>
> > 
> > Signed-off-by: Armin Reese <armin.c.reese@xxxxxxxxx>
> > ---
> >  drivers/gpu/drm/i915/i915_gem.c | 6 +++---
> >  1 file changed, 3 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > index 87d0aac..676e5f4 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -2922,8 +2922,6 @@ int i915_vma_unbind(struct i915_vma *vma)
> >  
> >  	vma->unbind_vma(vma);
> >  
> > -	i915_gem_gtt_finish_object(obj);
> > -
> >  	list_del_init(&vma->mm_list);
> >  	/* Avoid an unnecessary call to unbind on rebind. */
> >  	if (i915_is_ggtt(vma->vm))
> > @@ -2934,8 +2932,10 @@ int i915_vma_unbind(struct i915_vma *vma)
> >  
> >  	/* Since the unbound list is global, only move to that list if
> >  	 * no more VMAs exist. */
> > -	if (list_empty(&obj->vma_list))
> > +	if (list_empty(&obj->vma_list)) {
> > +		i915_gem_gtt_finish_object(obj);
> >  		list_move_tail(&obj->global_list, &dev_priv->mm.unbound_list);
> > +	}
> >  
> >  	/* And finally now the object is completely decoupled from this vma,
> >  	 * we can drop its hold on the backing storage and allow it to be
> 
> 
> 
> This patch is a fix, and therefore needs a commit message to explain the
> bug that you're trying to fix. I will help.
> "
> When using an IOMMU, GEM objects are mapped by their DMA address as the
> physical address is unknown. This depends on the underlying IOMMU
> driver to map and unmap the physical pages properly as defined in
> intel_iommu.c.
> 
> The current code will tell the IOMMU to unmap the GEM BO's pages on the
> destruction of the first VMA that "maps" that BO. This is clearly wrong
> as there may be other VMAs "mapping" that BO (using flink). The scanout
> is one such example.
> 
> The patch fixes this issue by only unmapping the DMA maps when there are
> no more VMAs mapping that object. This matches the point at which an
> object is considered unbound, as can be seen in the code. On the first
> VMA that again becomes bound, we will remap.
> 
> An alternate solution would be to move the DMA mapping to object
> creation and destruction. I am not sure if this is considered an
> unfriendly thing to do.
> 
> Some notes to backporters:
> The bug can never be hit without enabling the IOMMU. The existing code
> will also do the right thing when the object is shared via dmabuf. The
> failure should be demonstrable with flink. When not using
> intel_iommu_strict, current workloads are likely (likely, as in: off
> the top of my head) *not* to hit this bug, since we often tear down
> all VMAs for an object shared across multiple VMs, and we finish
> access to that object before the first DMA unmapping.
> intel_iommu_strict with flinked buffers is likely to hit this issue.
> "

Crap. I left out the important part. This is a note to backporters
trying to backport full PPGTT, which is disabled by default on all
kernels to date.

> 
-- 
Ben Widawsky, Intel Open Source Technology Center
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx



