Re: [RFC v2 05/12] drm/i915/svm: Page table mirroring support

On Fri, Dec 20, 2019 at 01:45:33PM +0000, Jason Gunthorpe wrote:
On Wed, Dec 18, 2019 at 02:41:47PM -0800, Niranjana Vishwanathapura wrote:
> > +static u32 i915_svm_build_sg(struct i915_address_space *vm,
> > +			     struct hmm_range *range,
> > +			     struct sg_table *st)
> > +{
> > +	struct scatterlist *sg;
> > +	u32 sg_page_sizes = 0;
> > +	u64 i, npages;
> > +
> > +	sg = NULL;
> > +	st->nents = 0;
> > +	npages = (range->end - range->start) / PAGE_SIZE;
> > +
> > +	/*
> > +	 * No need to dma map the host pages and later unmap it, as
> > +	 * GPU is not allowed to access it with SVM.
> > +	 * XXX: Need to dma map host pages for integrated graphics while
> > +	 * extending SVM support there.
> > +	 */
> > +	for (i = 0; i < npages; i++) {
> > +		u64 addr = range->pfns[i] & ~((1UL << range->pfn_shift) - 1);
> > +
> > +		if (sg && (addr == (sg_dma_address(sg) + sg->length))) {
> > +			sg->length += PAGE_SIZE;
> > +			sg_dma_len(sg) += PAGE_SIZE;
> > +			continue;
> > +		}
> > +
> > +		if (sg)
> > +			sg_page_sizes |= sg->length;
> > +
> > +		sg = sg ? __sg_next(sg) : st->sgl;
> > +		sg_dma_address(sg) = addr;
> > +		sg_dma_len(sg) = PAGE_SIZE;
>
> This still can't be like this - assigning pfn to 'dma_address' is
> fundamentally wrong.
>
> Whatever explanation you had, this needs to be fixed up first before we get
> to this patch.
>

The pfn is converted into a device address, which is what goes into
sg_dma_address. Ok, let me think about what else we can do here.

If you combine this with the other function and make it so only
DEVICE_PRIVATE pages get converted to a dma_address without dma_map,
then that would make sense.


Ok thanks, will do that.
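Something along these lines, perhaps (just a sketch: the hmm pfn to
struct page conversion is assumed already done by the caller, and
i915_dev_private_to_gpu_addr() is a made-up placeholder for however we
end up deriving the device address from our dev_pagemap):

/*
 * Sketch of the suggested split: only DEVICE_PRIVATE pages (device
 * memory the GPU reaches directly) skip the DMA API; ordinary system
 * pages go through dma_map_page().
 */
static int i915_svm_map_one_page(struct device *dev, struct page *page,
				 dma_addr_t *addr)
{
	if (is_device_private_page(page)) {
		/* Device memory: derive the device address directly. */
		*addr = i915_dev_private_to_gpu_addr(page);
		return 0;
	}

	/* System memory: must be mapped before the GPU can access it. */
	*addr = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
	return dma_mapping_error(dev, *addr) ? -EFAULT : 0;
}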

> > +static int
> > +i915_svm_invalidate_range_start(struct mmu_notifier *mn,
> > +				const struct mmu_notifier_range *update)
> > +{
> > +	struct i915_svm *svm = container_of(mn, struct i915_svm, notifier);
> > +	unsigned long length = update->end - update->start;
> > +
> > +	DRM_DEBUG_DRIVER("start 0x%lx length 0x%lx\n", update->start, length);
> > +	if (!mmu_notifier_range_blockable(update))
> > +		return -EAGAIN;
> > +
> > +	i915_gem_vm_unbind_svm_buffer(svm->vm, update->start, length);
> > +	return 0;
> > +}
>
> I still think you should strive for a better design than putting a
> notifier across the entire address space.
>

Yeah, I thought that could be a later optimization.
Thinking about it, it has to be either a new user API to set the range
or an intermediate data structure for tracking the bound ranges.
Will look into it.

Well, there are lots of options. Like I said, implicit ODP uses a
level of the device page table to attach the notifier.

There are many performance trade-offs here; it depends on what works
best for your workload, I suppose. But usually the fault path is the
fast path, so I would avoid registering mmu_intervals on it and accept
the higher probability of collisions.


Ok thanks, yeah, the solution should be tuned for the performant path. Will look into it.
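For the per-range direction, I am picturing roughly the below with the
new mmu_interval_notifier API (sketch only; struct i915_svm_binding and
i915_svm_bind_range() are invented names here):

struct i915_svm_binding {
	struct mmu_interval_notifier notifier;
	struct i915_address_space *vm;
};

static bool i915_svm_range_invalidate(struct mmu_interval_notifier *mni,
				      const struct mmu_notifier_range *range,
				      unsigned long cur_seq)
{
	struct i915_svm_binding *binding =
		container_of(mni, struct i915_svm_binding, notifier);

	if (!mmu_notifier_range_blockable(range))
		return false;

	/* Mark cached snapshots stale, then tear down the mirror. */
	mmu_interval_set_seq(mni, cur_seq);
	i915_gem_vm_unbind_svm_buffer(binding->vm, range->start,
				      range->end - range->start);
	return true;
}

static const struct mmu_interval_notifier_ops i915_svm_range_ops = {
	.invalidate = i915_svm_range_invalidate,
};

/* Called at bind time so only [start, start + length) is watched. */
static int i915_svm_bind_range(struct i915_svm_binding *binding,
			       struct mm_struct *mm,
			       unsigned long start, unsigned long length)
{
	return mmu_interval_notifier_insert(&binding->notifier, mm, start,
					    length, &i915_svm_range_ops);
}

That way the invalidate callback only fires for ranges we actually have
bound, instead of for every change in the process address space.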

Thanks,
Niranjana

Jason


