On Fri, 12 Feb 2021 at 10:22, Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> wrote:
>
> We already wrap i915_vma.node.start for use with the GGTT, as there we
> can perform additional sanity checks that the node belongs to the GGTT
> and fits within the 32b registers. In the next couple of patches, we
> will introduce guard pages around the objects _inside_ the drm_mm_node
> allocation. That is, we will offset the vma->pages so that the first page
> is at drm_mm_node.start + vma->guard (not 0, as is currently the case).
> All users must then not use i915_vma.node.start directly, but compute
> the guard offset; thus all users are converted to use an
> i915_vma_offset() wrapper.
>
> The notable exceptions are the selftests that are testing the exact
> behaviour of i915_vma_pin/i915_vma_insert.
>
> Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> ---

<snip>

> @@ -562,10 +562,11 @@ void __i915_vma_set_map_and_fenceable(struct i915_vma *vma)
>  	GEM_BUG_ON(!i915_vma_is_ggtt(vma));
>  	GEM_BUG_ON(!vma->fence_size);
>
> -	fenceable = (vma->node.size >= vma->fence_size &&
> -		     IS_ALIGNED(vma->node.start, vma->fence_alignment));
> +	fenceable = (i915_vma_size(vma) >= vma->fence_size &&
> +		     IS_ALIGNED(i915_vma_offset(vma), vma->fence_alignment));
>
> -	mappable = vma->node.start + vma->fence_size <= i915_vm_to_ggtt(vma->vm)->mappable_end;
> +	mappable = (i915_vma_offset(vma) + vma->fence_size <=
> +		    i915_vm_to_ggtt(vma->vm)->mappable_end);

i915_ggtt_offset(vma) could be used here.

Reviewed-by: Matthew Auld <matthew.auld@xxxxxxxxx>

_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx