On Thu, Sep 19, 2013 at 09:06:39PM -0700, Ben Widawsky wrote:
> @@ -1117,8 +1114,25 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
>  	 * batch" bit. Hence we need to pin secure batches into the global gtt.
>  	 * hsw should have this fixed, but let's be paranoid and do it
>  	 * unconditionally for now. */
> -	if (flags & I915_DISPATCH_SECURE && !batch_obj->has_global_gtt_mapping)
> -		i915_gem_gtt_bind_object(batch_obj, batch_obj->cache_level);
> +	if (flags & I915_DISPATCH_SECURE) {
> +		struct i915_address_space *ggtt = obj_to_ggtt(batch_obj);
> +		/* Assuming all privileged batches are in the global GTT means
> +		 * we need to make sure we have a global gtt offset, as well as
> +		 * the PTEs mapped. As mentioned above, we can forego this on
> +		 * HSW, but don't.
> +		 */
> +		ret = i915_gem_object_bind_to_vm(batch_obj, ggtt, 0, false,
> +						 false);
> +		if (ret)
> +			goto err;

bind_to_vm() has unwanted side-effects here - notably always allocating a
node and corrupting lists. Just pin, ggtt->bind_vma, unpin.

Hmmm, except that we also need a move_to_active (as we are not presuming
vm == ggtt).

pin, ggtt->bind_vma, move_to_active(ggtt), unpin.

And then hope we have the correct flushes in place for that to be retired
if nothing else is going on with that ggtt.

> +
> +		ggtt->bind_vma(i915_gem_obj_to_ggtt(batch_obj),
> +			       batch_obj->cache_level,
> +			       GLOBAL_BIND);
> +
> +		exec_start += i915_gem_obj_ggtt_offset(batch_obj);
> +	} else
> +		exec_start += i915_gem_obj_offset(batch_obj, vm);
>
>  	ret = i915_gem_execbuffer_move_to_gpu(ring, &eb->vmas);
>  	if (ret)

-- 
Chris Wilson, Intel Open Source Technology Centre
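
For illustration, the sequence suggested above (pin, ggtt->bind_vma,
move_to_active on the ggtt, unpin) might look roughly like the sketch
below against the quoted hunk. This is only a sketch: the pin/unpin and
move_to_active helper names and signatures are assumptions made for the
example, not code taken from the patch or the i915 tree.

	/*
	 * Sketch only: the pin/unpin/move_to_active helpers and their
	 * exact signatures here are illustrative assumptions.
	 */
	if (flags & I915_DISPATCH_SECURE) {
		struct i915_address_space *ggtt = obj_to_ggtt(batch_obj);

		/* Pin rather than bind_to_vm(): do not allocate a fresh
		 * node or disturb the vm lists. */
		ret = i915_gem_object_pin(batch_obj, ggtt, 0, false, false);
		if (ret)
			goto err;

		/* Write the global GTT PTEs for the secure batch. */
		ggtt->bind_vma(i915_gem_obj_to_ggtt(batch_obj),
			       batch_obj->cache_level,
			       GLOBAL_BIND);

		/* Keep the ggtt binding alive until the ring is done with
		 * it; vm != ggtt in general, so tracking activity on vm
		 * alone is not enough. */
		i915_gem_object_move_to_active(batch_obj, ring);

		/* Drop the temporary pin; the active reference holds the
		 * node until the request is retired. */
		i915_gem_object_unpin(batch_obj);

		exec_start += i915_gem_obj_ggtt_offset(batch_obj);
	} else {
		exec_start += i915_gem_obj_offset(batch_obj, vm);
	}

As noted above, this still relies on the ggtt seeing the right flushes so
the active reference is eventually retired even if nothing else is using
that address space.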