Re: [PATCH 32/46] drm/i915: Pull VM lists under the VM mutex.

Quoting Tvrtko Ursulin (2019-01-16 16:47:43)
> 
> On 07/01/2019 11:54, Chris Wilson wrote:
> > @@ -1530,20 +1531,21 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
> >   
> >   static void i915_gem_object_bump_inactive_ggtt(struct drm_i915_gem_object *obj)
> >   {
> > -     struct drm_i915_private *i915;
> > +     struct drm_i915_private *i915 = to_i915(obj->base.dev);
> >       struct list_head *list;
> >       struct i915_vma *vma;
> >   
> >       GEM_BUG_ON(!i915_gem_object_has_pinned_pages(obj));
> >   
> > +     mutex_lock(&i915->ggtt.vm.mutex);
> >       for_each_ggtt_vma(vma, obj) {
> >               if (!drm_mm_node_allocated(&vma->node))
> >                       continue;
> >   
> >               list_move_tail(&vma->vm_link, &vma->vm->bound_list);
> >       }
> > +     mutex_unlock(&i915->ggtt.vm.mutex);
> 
> This is now struct_mutex -> vm->mutex nesting, which we would preferably 
> want to avoid? There are only two callers of the function.
> 
> It looks like we could remove the nesting from i915_gem_set_domain_ioctl 
> by just moving the call to after the mutex unlock.
> 
> The i915_gem_object_unpin_from_display_plane callers are not as easy, so 
> maybe at least do the one above?

unpin_from_display_plane is the goal here tbh.
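
For illustration, the reordering suggested for i915_gem_set_domain_ioctl
would look roughly like the below; the surrounding function shape is
assumed here, not copied from the tree:

	err = ...;
	mutex_unlock(&dev->struct_mutex);

	/*
	 * Bump the LRU only after struct_mutex has been dropped, so the
	 * ggtt.vm.mutex taken inside bump_inactive_ggtt() no longer nests
	 * under struct_mutex on this path.
	 */
	i915_gem_object_bump_inactive_ggtt(obj);

	i915_gem_object_put(obj);
	return err;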

> > -     i915 = to_i915(obj->base.dev);
> >       spin_lock(&i915->mm.obj_lock);
> >       list = obj->bind_count ? &i915->mm.bound_list : &i915->mm.unbound_list;
> >       list_move_tail(&obj->mm.link, list);
> > diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
> > index a76f65fe86be..4a0c6830659d 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_evict.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_evict.c
> > @@ -433,6 +433,7 @@ int i915_gem_evict_vm(struct i915_address_space *vm)
> >       }
> >   
> >       INIT_LIST_HEAD(&eviction_list);
> > +     mutex_lock(&vm->mutex);
> >       list_for_each_entry(vma, &vm->bound_list, vm_link) {
> >               if (i915_vma_is_pinned(vma))
> >                       continue;
> > @@ -440,6 +441,7 @@ int i915_gem_evict_vm(struct i915_address_space *vm)
> >               __i915_vma_pin(vma);
> >               list_add(&vma->evict_link, &eviction_list);
> >       }
> > +     mutex_unlock(&vm->mutex);
> 
> This is another nesting so I suppose you leave all this fun for later?

Yes, the intent was to put the locks in place (gradually) then peel back
struct_mutex (gradually).
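
Schematically, the transitional state is (names from the patch, overall
shape assumed):

	/*
	 * The new vm->mutex is taken inside sections still covered by
	 * struct_mutex, so for now lockdep sees struct_mutex -> vm->mutex
	 * nesting.  Once every walker of the vm lists holds vm->mutex,
	 * struct_mutex can be peeled away from the outer paths without
	 * touching the list locking again.
	 */
	mutex_lock(&dev->struct_mutex);		/* legacy outer lock, still held */
	...
	mutex_lock(&vm->mutex);			/* new lock for vm->bound_list et al */
	list_for_each_entry(vma, &vm->bound_list, vm_link) {
		...
	}
	mutex_unlock(&vm->mutex);
	...
	mutex_unlock(&dev->struct_mutex);	/* dropped in later patches */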

> > @@ -689,8 +694,10 @@ i915_vma_remove(struct i915_vma *vma)
> >   
> >       vma->ops->clear_pages(vma);
> >   
> > +     mutex_lock(&vma->vm->mutex);
> >       drm_mm_remove_node(&vma->node);
> 
> This is by design also protected by the vm->mutex? But insertion is not 
> AFAICT.

Not yet. Can you guess which bit proved tricky? ;) Getting the right
point to lock for execbuf and eviction. At the same time there is the
fuss with ww_mutex over there, as well as contexts et al, and it all
gets confusing quickly.
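
The missing symmetric side would be roughly the below; the function names
exist, but the placement is hypothetical, and exactly where the lock goes
on the insert/execbuf path is the part being deferred:

	/*
	 * Hypothetical placement: take vm->mutex around node insertion to
	 * mirror the removal side, once the execbuf/evict ordering and the
	 * ww_mutex work allow it.
	 */
	mutex_lock(&vma->vm->mutex);
	ret = drm_mm_insert_node_in_range(&vma->vm->mm, &vma->node,
					  size, alignment, colour,
					  start, end, DRM_MM_INSERT_BEST);
	if (ret == 0)
		list_move_tail(&vma->vm_link, &vma->vm->bound_list);
	mutex_unlock(&vma->vm->mutex);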

...(tries to remember why this patch is actually here; this set was
picked so that I could do obj->vma_list without struct_mutex (which
was used for timeline allocation), and I pulled in anything required
to resolve conflicts, but why this one?)...
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



