> -----Original Message-----
> From: Chris Wilson [mailto:chris@xxxxxxxxxxxxxxxxxx]
> Sent: Wednesday, July 15, 2015 4:06 PM
> To: Goel, Akash
> Cc: Daniel, Thomas; intel-gfx@xxxxxxxxxxxxxxxxxxxxx; Belgaumkar, Vinay;
> Winiarski, Michal; Zou, Nanhai
> Subject: Re: [PATCH v4] drm/i915: Add soft-pinning API for execbuffer
>
> On Wed, Jul 15, 2015 at 08:25:23PM +0530, Goel, Akash wrote:
> > >>+int
> > >>+i915_gem_evict_for_vma(struct i915_vma *target)
> > >>+{
> > >>+        struct drm_mm_node *node, *next;
> > >>+
> > >>+        list_for_each_entry_safe(node, next,
> > >>+                                 &target->vm->mm.head_node.node_list,
> > >>+                                 node_list) {
> > >>+                struct i915_vma *vma;
> > >>+                int ret;
> > >>+
> > >>+                if (node->start + node->size <= target->node.start)
> > >>+                        continue;
> > >>+                if (node->start >= target->node.start + target->node.size)
> > >>+                        break;
> > >>+
> > >>+                vma = container_of(node, typeof(*vma), node);
> > >>+
> > >>+                if (vma->pin_count) {
> > >>+                        /* We may need to evict a buffer in the same batch */
> > >>+                        if (!vma->exec_entry)
> > >>+                                return -EBUSY;
> > >>+
> > >>+                        if (vma->exec_entry->flags & EXEC_OBJECT_PINNED)
> > >>+                                /* Overlapping fixed objects in the same batch */
> > >>+                                return -EINVAL;
> > >>+
> > >>+                        return -ENOSPC;
> >
> > Can we actually hit this condition, considering the soft-pinned
> > objects are now at the front of the 'eb->vmas' list?
> > If we do encounter such a case, it probably means that the
> > overlapping object is already pinned from some other path.
>
> Note that softpinned objects are only first on the second pass through
> the reservation.

Eh? I modified i915_gem_execbuffer_reserve() to always put the softpinned
vmas first, so they should never collide with objects in the same execbuf
(see the sketch at the end of this mail for what I mean).

> > Is there scope for an additional check here?
> > i.e. if vma->pin_count is > 1, the object is not only pinned by the
> > execbuffer and hence cannot be evicted, so -EBUSY can be returned to
> > userspace straight away.
>
> Consider this instead:
>
> int
> i915_gem_evict_for_vma(struct i915_vma *target, unsigned flags)
> {
>         struct list_head eviction_list;
>         struct interval_tree_node *it;
>         u64 end = target->node.start + target->node.size;
>         struct drm_mm_node *node;
>         struct i915_vma *vma, *next;
>         int ret;
>
>         it = interval_tree_iter_first(&target->vm->mm.interval_tree,
>                                       target->node.start, end - 1);
>         if (it == NULL)
>                 return 0;
>
>         INIT_LIST_HEAD(&eviction_list);
>         node = container_of(it, typeof(*node), it);
>         list_for_each_entry_from(node,
>                                  &target->vm->mm.head_node.node_list,
>                                  node_list) {
>                 if (node->start >= end)
>                         break;
>
>                 vma = container_of(node, typeof(*vma), node);
>                 if (flags & PIN_NONBLOCK &&
>                     (vma->pin_count || vma->active.request)) {
>                         ret = -ENOSPC;
>                         break;
>                 }
>
>                 if (vma->exec_entry &&
>                     vma->exec_entry->flags & EXEC_OBJECT_PINNED) {
>                         /* Overlapping pinned objects in the same batch */
>                         ret = -EINVAL;
>                         break;
>                 }
>
>                 if (vma->pin_count) {
>                         /* We may need to evict a buffer in the same batch */
>                         ret = vma->exec_entry ? -ENOSPC : -EBUSY;
>                         break;
>                 }
>
>                 list_add(&vma->exec_list, &eviction_list);
>                 drm_gem_object_reference(&vma->obj->base);
>         }
>
>         ret = 0;
>         list_for_each_entry_safe(vma, next, &eviction_list, exec_list) {
>                 struct drm_i915_gem_object *obj = vma->obj;
>                 if (ret == 0)
>                         ret = i915_vma_unbind(vma);
>                 drm_gem_object_unreference(&obj->base);
>         }
>
>         return ret;
> }
> -Chris

This contains stuff I don't have visibility of (mm.interval_tree,
vma.active).

Thomas.
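
P.S. For illustration only, a minimal sketch of what I mean by putting the
soft-pinned vmas first in the reservation. The helper name, its exact call
site in i915_gem_execbuffer_reserve(), and using EXEC_OBJECT_PINNED as the
marker are assumptions for the sketch, not the actual patch:

/*
 * Hypothetical helper (not in the patch): partition eb->vmas so that
 * every soft-pinned vma is processed before the floating ones.
 * Assumes <linux/list.h> and the i915_vma exec_list/exec_entry fields
 * used by this series.
 */
static void eb_put_pinned_first(struct list_head *vmas)
{
        struct i915_vma *vma, *next;
        LIST_HEAD(pinned);

        list_for_each_entry_safe(vma, next, vmas, exec_list)
                if (vma->exec_entry->flags & EXEC_OBJECT_PINNED)
                        list_move_tail(&vma->exec_list, &pinned);

        /* splice the soft-pinned vmas back onto the head of the list */
        list_splice(&pinned, vmas);
}

Called once before the reservation loop, this keeps the relative order of
the soft-pinned vmas and lets them claim their fixed offsets before any
floating object is placed.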