Quoting Chris Wilson (2018-06-05 10:19:42)
> To allow for future non-object backed vma, we need to be able to
> specialise the callbacks for binding, et al, the vma. For example,
> instead of calling vma->vm->bind_vma(), we now call
> vma->ops->bind_vma(). This gives us the opportunity to later override the
> operation for a custom vma.
>
> Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
> Cc: Mika Kuoppala <mika.kuoppala@xxxxxxxxxxxxxxx>
> Cc: Matthew Auld <matthew.william.auld@xxxxxxxxx>

<SNIP>

> +++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
> @@ -58,6 +58,7 @@
>
>  struct drm_i915_file_private;
>  struct drm_i915_fence_reg;
> +struct i915_vma;
>
>  typedef u32 gen6_pte_t;
>  typedef u64 gen8_pte_t;
> @@ -254,6 +255,20 @@ struct i915_pml4 {
>  	struct i915_page_directory_pointer *pdps[GEN8_PML4ES_PER_PML4];
>  };
>
> +struct i915_vma_ops {
> +	/*
> +	 * Unmap an object from an address space. This usually consists of
> +	 * setting the valid PTE entries to a reserved scratch page.
> +	 */
> +	void (*unbind_vma)(struct i915_vma *vma);
> +	/* Map an object into an address space with the given cache flags. */
> +	int (*bind_vma)(struct i915_vma *vma,
> +			enum i915_cache_level cache_level,
> +			u32 flags);

While here, you could swap bind/unbind to be in the logical order in which
they are also initialized.

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>

Regards, Joonas

> +	int (*set_pages)(struct i915_vma *vma);
> +	void (*clear_pages)(struct i915_vma *vma);
> +};
> +
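For illustration, a minimal sketch of what the new ops table enables (the
example_* helper names below are only placeholders, not taken from the
patch), with bind listed before unbind as suggested above; callers dispatch
through the vma rather than the address space:

	/* A backend fills in its own ops table once... */
	static const struct i915_vma_ops example_vma_ops = {
		.bind_vma = example_bind_vma,     /* write PTEs with cache flags */
		.unbind_vma = example_unbind_vma, /* point PTEs back at scratch */
		.set_pages = example_set_pages,
		.clear_pages = example_clear_pages,
	};

	/* ...and binding then goes through vma->ops instead of vma->vm: */
	err = vma->ops->bind_vma(vma, cache_level, flags);
	if (err)
		return err;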