On Mon, Nov 25, 2019 at 11:33:27AM -0500, Jerome Glisse wrote:
> On Fri, Nov 22, 2019 at 11:33:12PM +0000, Jason Gunthorpe wrote:
> > On Fri, Nov 22, 2019 at 12:57:27PM -0800, Niranjana Vishwanathapura wrote:
> > > [...]
> > > +static int
> > > +i915_range_fault(struct i915_svm *svm, struct hmm_range *range)
> > > +{
> > > +        long ret;
> > > +
> > > +        range->default_flags = 0;
> > > +        range->pfn_flags_mask = -1UL;
> > > +
> > > +        ret = hmm_range_register(range, &svm->mirror);
> > > +        if (ret) {
> > > +                up_read(&svm->mm->mmap_sem);
> > > +                return (int)ret;
> > > +        }
> > Using a temporary range is the pattern from nouveau, is it really
> > necessary in this driver?
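(For reference, a minimal sketch of the on-stack temporary-range pattern
being referred to, written against the same hmm_range_register()/
hmm_range_fault() interface the quoted patch uses; fault_one_page() and
its parameters are illustrative, not code from either driver. Range
validity checking and retry on invalidation are elided.)

static int fault_one_page(struct hmm_mirror *mirror, struct mm_struct *mm,
                          unsigned long addr)
{
        uint64_t pfn = 0;
        struct hmm_range range = {
                .start = addr & PAGE_MASK,
                .end = (addr & PAGE_MASK) + PAGE_SIZE,
                .pfns = &pfn,
                .default_flags = 0,
                .pfn_flags_mask = -1UL,
        };
        long ret;

        /* The range lives only for the duration of this fault;
         * nothing is tracked long term. */
        ret = hmm_range_register(&range, mirror);
        if (ret)
                return ret;

        down_read(&mm->mmap_sem);
        ret = hmm_range_fault(&range, 0);
        up_read(&mm->mmap_sem);

        hmm_range_unregister(&range);
        return ret < 0 ? (int)ret : 0;
}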
> Just to comment on this: for GPUs the usage model is not that the
> application registers a range of virtual addresses it wants to use;
> it is that the GPU can access _any_ valid CPU address just like the
> CPU would (modulo mmap of the device file).
>
> This is because the API you want in userspace is the application
> passing a random pointer to the GPU, and the GPU being able to chase
> down any kind of random pointer chain (assuming all pointers are
> valid, ie pointing to valid virtual addresses for the process).
>
> This is unlike the RDMA case.
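To make the pointer-chase point concrete, the usage model looks roughly
like this from userspace; gpu_submit() and sum_list_kernel() are made-up
names standing in for whatever submission API the driver exposes:

#include <stdlib.h>

struct node { struct node *next; int payload; };

/* Hypothetical GPU entry points, declared only for illustration. */
extern void sum_list_kernel(void *head);
extern void gpu_submit(void (*kernel)(void *), void *arg);

int main(void)
{
        struct node *head = NULL;

        /* Build an ordinary malloc'd linked list; nothing is
         * registered, pinned, or copied for the device. */
        for (int i = 0; i < 100; i++) {
                struct node *n = malloc(sizeof(*n));
                n->payload = i;
                n->next = head;
                head = n;
        }

        /* Hand the raw CPU pointer to the GPU; the device chases
         * ->next exactly as the CPU would, faulting pages in on
         * demand. */
        gpu_submit(sum_list_kernel, head);
        return 0;
}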
> That being said, for best performance we still expect a well-behaved
> application to provide hints to the kernel so that we know whether a
> range of virtual addresses is likely to be used by the GPU or not.
> But this is not, and should not be, a requirement.
>
> I posted a patchset and have given talks about this, but long term I
> believe we want a common API to manage the hints provided by
> userspace (see my talk at LPC this year about a new syscall to bind
> memory to a device). With such a thing in place we could hang mmu
> notifier ranges off it. But the driver will still need to handle the
> case where there is no hint provided by userspace, and thus no
> advance knowledge of which VAs might be accessed.
Thanks Jerome for the explanation. Will check out your LPC talk.

Yes, I agree. When GPU faulting support is available, the driver will
handle the fault, migrate the page if needed, and bind the page using HMM.
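A rough sketch of what that fault path could look like, reusing the
i915_range_fault() from the quoted patch; i915_should_migrate(),
i915_migrate_to_lmem(), and i915_bind_pages() are placeholder names,
not functions from this series:

static int i915_handle_gpu_fault(struct i915_svm *svm, u64 addr, u64 len)
{
        u64 *pfns;
        struct hmm_range range;
        int ret;

        pfns = kvmalloc_array(len >> PAGE_SHIFT, sizeof(*pfns), GFP_KERNEL);
        if (!pfns)
                return -ENOMEM;

        range = (struct hmm_range){
                .start = addr,
                .end = addr + len,
                .pfns = pfns,
        };

        /* Optionally migrate the backing pages to device memory
         * before mirroring them. */
        if (i915_should_migrate(svm, addr, len))
                i915_migrate_to_lmem(svm, addr, len);

        /* Resolve the CPU page tables via HMM; the quoted
         * i915_range_fault() drops mmap_sem itself on error. */
        down_read(&svm->mm->mmap_sem);
        ret = i915_range_fault(svm, &range);
        if (!ret) {
                /* Mirror the faulted pfns into the GPU page tables. */
                ret = i915_bind_pages(svm, &range);
                up_read(&svm->mm->mmap_sem);
        }

        kvfree(pfns);
        return ret;
}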
This patch series adds support for prefetch and bind hints (via explicit
ioctls). Also, patch 12 of the series lets the user enable/disable SVM on
a per-VM basis; SVM is disabled by default.
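From userspace, the flow would then look something like this. struct
drm_i915_gem_vm_control and DRM_IOCTL_I915_GEM_VM_CREATE are existing
i915 uAPI, but the SVM flag, the prefetch struct, and the prefetch ioctl
names below are placeholders, not necessarily the exact uAPI of this
series:

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

/* Hypothetical hint payload, for illustration only. */
struct i915_svm_prefetch { uint32_t vm_id; uint64_t start, length; };

void enable_svm_and_hint(int fd, void *buf, uint64_t buf_size)
{
        /* Opt in to SVM at VM creation time (it is off by default). */
        struct drm_i915_gem_vm_control ctl = {
                .flags = I915_VM_CREATE_FLAGS_USE_SVM, /* placeholder */
        };
        ioctl(fd, DRM_IOCTL_I915_GEM_VM_CREATE, &ctl);

        /* Optional hint: this range is likely to be used by the GPU.
         * The ioctl name is illustrative. */
        struct i915_svm_prefetch hint = {
                .vm_id = ctl.vm_id,
                .start = (uint64_t)(uintptr_t)buf,
                .length = buf_size,
        };
        ioctl(fd, DRM_IOCTL_I915_GEM_VM_PREFETCH, &hint);
}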
Niranjana
> Cheers,
> Jérôme