On Mon, Oct 17, 2016 at 02:51:01PM +0100, Tvrtko Ursulin wrote:
> 
> On 17/10/2016 12:31, Chris Wilson wrote:
> >On Mon, Oct 17, 2016 at 11:55:54AM +0100, Tvrtko Ursulin wrote:
> >>On 14/10/2016 13:18, Chris Wilson wrote:
> >>> static void
> >>>-i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj)
> >>>+i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
> >>>+			      struct sg_table *pages)
> >>> {
> >>> 	struct sgt_iter sgt_iter;
> >>> 	struct page *page;
> >>>-	int ret;
> >>>
> >>>-	GEM_BUG_ON(obj->mm.madv == __I915_MADV_PURGED);
> >>>-
> >>>-	ret = i915_gem_object_set_to_cpu_domain(obj, true);
> >>>-	if (WARN_ON(ret)) {
> >>>-		/* In the event of a disaster, abandon all caches and
> >>>-		 * hope for the best.
> >>>-		 */
> >>>-		i915_gem_clflush_object(obj, true);
> >>>-		obj->base.read_domains = obj->base.write_domain = I915_GEM_DOMAIN_CPU;
> >>>-	}
> >>>+	__i915_gem_object_release_shmem(obj);
> >>
> >>Waiting for the object to become inactive is now gone. I did not
> >>spot that that changed with this patch. Did it?
> >
> >There's no wait here. We have BUG_ONs today (that remain in place)
> >to catch trying to free pages that are in use by the GPU. This is just
> >a change of domains (and I wanted to keep the set-to-cpu-domain
> >asserts, and avoid the other side effects).
> 
> Oh right, makes sense.
> 
> >>> int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
> >>> {
> >>>-	struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
> >>>-	const struct drm_i915_gem_object_ops *ops = obj->ops;
> >>>-	int ret;
> >>>+	int err;
> >>>
> >>> 	lockdep_assert_held(&obj->base.dev->struct_mutex);
> >>>
> >>> 	if (obj->mm.pages)
> >>> 		return 0;
> >>>
> >>>-	if (obj->mm.madv != I915_MADV_WILLNEED) {
> >>>-		DRM_DEBUG("Attempting to obtain a purgeable object\n");
> >>>-		__i915_gem_object_unpin_pages(obj);
> >>>-		return -EFAULT;
> >>>-	}
> >>>-
> >>>-	ret = ops->get_pages(obj);
> >>>-	if (ret) {
> >>>+	err = ____i915_gem_object_get_pages(obj);
> >>>+	if (err)
> >>> 		__i915_gem_object_unpin_pages(obj);
> >>>-		return ret;
> >>>-	}
> >>>
> >>>-	list_add_tail(&obj->global_list, &dev_priv->mm.unbound_list);
> >>
> >>Objects will no longer appear on the unbound list when they have
> >>backing store?
> >
> >They still do. We don't hold the struct_mutex here, so we defer the
> >global list tracking to the domain management which is slightly later.
> 
> Later, as in when the VMA gets pinned?...

On unbind, and whilst unbound upon use by the user.

> >However, there is still a window of opportunity where we have pages
> >pinned for our use but not yet visible to the shrinker. A bit of
> >rejigging should mean we only need to move objects to the unbound
> >list upon pages_pin_count==0, and it will require a spinlock, but that
> >may actually work out to be less code. But it won't yet reduce the
> >struct_mutex hold time in the shrinker, so I'm not seeing the upside
> >yet.
> 
> ... in which case userspace just creating objects and not touching
> them avoids putting them on bound/unbound lists?

We don't allocate pages until the user uses them, and that goes through
the domain tracking (or should). Yes, there is a gap there, and to solve
it we need to manage the mm.unbound_list using a spinlock rather than a
mutex. Just trying to decide if it is worth closing the hole now, or
later when we have a better idea of how to remove struct_mutex from the
shrinker.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
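
For reference, a rough sketch of the inner helper referenced in the second
hunk above, reconstructed from the lines it absorbs out of
__i915_gem_object_get_pages(); the exact body in the posted series may
differ:

static int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
{
	/* The purgeable-object check moves here from the outer function. */
	if (obj->mm.madv != I915_MADV_WILLNEED) {
		DRM_DEBUG("Attempting to obtain a purgeable object\n");
		return -EFAULT;
	}

	/* Acquire the backing store from the backend; the caller drops its
	 * pages pin on error. Note there is no list_add_tail() onto
	 * mm.unbound_list here: the global list tracking is deferred to the
	 * domain management, as discussed in the thread above.
	 */
	return obj->ops->get_pages(obj);
}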