On Mon, Nov 25, 2019 at 01:24:18PM +0000, Jason Gunthorpe wrote:
> On Sun, Nov 24, 2019 at 01:12:47PM -0800, Niranjan Vishwanathapura wrote:
> > > Using a temporary range is the pattern from nouveau, is it really
> > > necessary in this driver?
> >
> > Yah, not required. In my local build I tried with proper default_flags
> > and set pfn_flags_mask to 0 and it is working fine.
>
> Sorry, I meant calling hmm_range_register during fault processing.
>
> If your driver works around user space objects that cover a VA then
> the range should be created when the object is created.
>
Oh ok. No, there is no user space object here.
Binding a user space object to the device page table is handled in
patch 03 of this series (no HMM there).
This path is for binding system allocated (malloc) memory; the user calls
the bind ioctl with the VA range.
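
To make that concrete, from user space it would look roughly like the
below (struct and ioctl names here are made up for illustration; the
real uapi is whatever patch 03 of the series defines):

/* Illustration only: bind a malloc'ed buffer by its VA range. */
void *buf = malloc(size);
struct i915_svm_bind args = {                   /* hypothetical struct */
        .start  = (uint64_t)(uintptr_t)buf,     /* start of the VA range */
        .length = (uint64_t)size,               /* length of the VA range */
};
ioctl(drm_fd, I915_SVM_BIND_IOCTL, &args);      /* hypothetical ioctl */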
> > > > +        /*
> > > > +         * No need to dma map the host pages and later unmap them, as
> > > > +         * the GPU is not allowed to access them with SVM. Hence, no need
> > > > +         * for any intermediate data structure to hold the mappings.
> > > > +         */
> > > > +        for (i = 0; i < npages; i++) {
> > > > +                u64 addr = range->pfns[i] & ~((1UL << range->pfn_shift) - 1);
> > > > +
> > > > +                if (sg && (addr == (sg_dma_address(sg) + sg->length))) {
> > > > +                        sg->length += PAGE_SIZE;
> > > > +                        sg_dma_len(sg) += PAGE_SIZE;
> > > > +                        continue;
> > > > +                }
> > > > +
> > > > +                if (sg)
> > > > +                        sg_page_sizes |= sg->length;
> > > > +
> > > > +                sg = sg ? __sg_next(sg) : st->sgl;
> > > > +                sg_dma_address(sg) = addr;
> > > > +                sg_dma_len(sg) = PAGE_SIZE;
> > > > +                sg->length = PAGE_SIZE;
> > > > +                st->nents++;
> > >
> > > It is odd to build the range into a sgl.
> > >
> > > IMHO it is not a good idea to use the sg_dma_address like this, that
> > > should only be filled in by a dma map. Where does it end up being
> > > used?
> >
> > The sgl is used to plug into the page table update function in i915.
> >
> > For device memory on a discrete card, which is the case here, we
> > don't need a dma map.
>
> How did we get to device memory on a card? Isn't range->pfns a CPU PFN
> at this point?
>
> I'm confused.
Device memory is plugged in through devm_memremap_pages() in patch 07 of
this series. In that patch, we convert the CPU PFN to a device PFN before
building the sgl (this is similar to the nouveau driver).
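
For reference, that conversion follows the same idea as nouveau's
nouveau_dmem_convert_pfn(); roughly this (a sketch, not the exact patch
code; dev_page_to_devmem_addr() stands in for the driver-specific helper
that returns the page's address in device memory):

for (i = 0; i < npages; i++) {
        struct page *page;
        u64 addr;

        page = hmm_device_entry_to_page(range, range->pfns[i]);
        if (!page || !is_device_private_page(page))
                continue;

        /* Keep the HMM flag bits in the low bits, replace the CPU PFN
         * with the device address of the page. */
        addr = dev_page_to_devmem_addr(page);
        range->pfns[i] &= (1UL << range->pfn_shift) - 1;
        range->pfns[i] |= (addr >> PAGE_SHIFT) << range->pfn_shift;
}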
> But earlier just called hmm_range_fault(), it can return all kinds of
> pages. If these are only allowed to be device pages here then that
> must be checked (under lock)
Yah, let me add the check.
(I kept it unchecked for debugging and forgot to add the check before
sending the patches.)
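
Something along these lines (a rough sketch only, run under the lock that
serializes against invalidation; i915_svm->pagemap is a hypothetical
pointer to the dev_pagemap we registered in patch 07):

for (i = 0; i < npages; i++) {
        struct page *page;

        page = hmm_device_entry_to_page(range, range->pfns[i]);
        if (!page || !is_device_private_page(page) ||
            page->pgmap != i915_svm->pagemap)
                return -EFAULT; /* not one of our device pages */
}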
> And putting the cpu PFN of a ZONE_DEVICE device page into
> sg_dma_address still looks very wrong to me
The below call in patch 7 converts any CPU PFN to a device address, so it
won't be a CPU PFN:

i915_dmem_convert_pfn(vm->i915, &range);

Also, the only reason to use an sgl is that the i915 page table update
functions take an sgl; otherwise we could deal directly with the
range.pfns array.
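
In other words, the fault-side binding today is roughly (names below are
simplified stand-ins, not the exact i915 functions):

i915_dmem_convert_pfn(vm->i915, &range);        /* CPU PFNs -> device addresses */
st = i915_svm_build_sg(vm, &range);             /* the loop quoted above */
i915_svm_bind_commit(vm, start, length, st);    /* existing sgl-based PT update */

If the page table update path took a pfn/address array instead, the
middle step would go away and we could feed range.pfns directly.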
Niranjana
> Jason