On Fri, Dec 13, 2019 at 5:24 PM Niranjan Vishwanathapura <niranjana.vishwanathapura@xxxxxxxxx> wrote:
On Fri, Dec 13, 2019 at 04:58:42PM -0600, Jason Ekstrand wrote:
>
> +/**
> + * struct drm_i915_gem_vm_bind
> + *
> + * Bind an object in a vm's page table.
>
> First off, this is something I've wanted for a while for Vulkan, it's just
> never made its way high enough up the priority list. However, it's going
> to have to come one way or another soon. I'm glad to see kernel API for
> this being proposed.
> I do, however, have a few high-level comments/questions about the API:
> 1. In order to be useful for sparse memory support, the API has to go the
> other way around so that it binds a VA range to a range within the BO. It
> also needs to be able to handle overlapping where two different VA ranges
> may map to the same underlying bytes in the BO. This likely means that
> unbind needs to also take a VA range and only unbind that range.
> 2. If this is going to be useful for managing GL's address space where we
> have lots of BOs, we probably want it to take a list of ranges so we
> aren't making one ioctl for each thing we want to bind.
Hi Jason,
Yah, some of these requirements came up.
Yes, I have raised them every single time an API like this has come across my e-mail inbox for years and they continue to get ignored. Why are we landing an API that we know isn't the API we want, especially when it's pretty obvious roughly what the API we want is?

It may be less time in the short term, but long-term it means two ioctls and two implementations in i915, IGT tests for both code paths, and code in all UMDs to call one or the other depending on what kernel you're running on, and we have to maintain all that code going forward forever. Sure, that's a price we pay today for a variety of things, but that's because they all seemed like the right thing at the time. Landing the wrong API when we know it's the wrong API seems foolish.
They are not being done here due to the time and effort involved in defining
those requirements, then implementing and validating them.
For #1, yes, it would require more effort, but for #2 it really doesn't take any extra effort to make it take an array...
However, this ioctl can be extended in a backward-compatible way to handle
those requirements if needed.
> 3. Why are there no ways to synchronize this with anything? For binding,
> this probably isn't really needed as long as the VA range you're binding
> is empty. However, if you want to move bindings around or unbind
> something, the only option is to block in userspace and then call
> bind/unbind. This can be done but it means even more threads in the UMD
> which is unpleasant. One could argue that that's more or less what the
> kernel is going to have to do so we may as well do it in userspace.
> However, I'm not 100% convinced that's true.
> --Jason
>
Yah, that is the thought.
But as the SVM feature evolves, I think we can consider handling some such cases
if handling them in the driver makes a whole lot of sense.
Sparse binding exists as a feature. It's been in D3D for some time and it's in Vulkan. We pretty much know what the requirements are. If you go look at how it's supposed to work in Vulkan, you have a binding queue and it waits on semaphores before [un]binding and signals semaphores after [un]binding. The biggest problem from an API (as opposed to implementation) POV with doing that in i915 is that we have too many synchronization primitives to choose from. :-(
--Jason
Thanks,
Niranjana
>
> + */
> +struct drm_i915_gem_vm_bind {
> + /** VA start to bind **/
> + __u64 start;
> +
> + /** Type of memory to [un]bind **/
> + __u32 type;
> +#define I915_GEM_VM_BIND_SVM_OBJ 0
> +
> + /** Object handle to [un]bind for I915_GEM_VM_BIND_SVM_OBJ type **/
> + __u32 handle;
> +
> + /** vm to [un]bind **/
> + __u32 vm_id;
> +
> + /** Flags **/
> + __u32 flags;
> +#define I915_GEM_VM_BIND_UNBIND (1 << 0)
> +#define I915_GEM_VM_BIND_READONLY (1 << 1)
> +};
> +
> #if defined(__cplusplus)
> }
> #endif
> --
> 2.21.0.rc0.32.g243a4c7e27
>
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx