Re: [PATCH 06/23] drm/xe/svm: Introduce a helper to build sg table from hmm range

On Wed, Apr 24, 2024 at 02:31:36AM +0000, Matthew Brost wrote:

> AMD seems to register notifiers on demand for parts of the address space
> [1], I think Nvidia's open source driver does this too (can look this up
> if needed). We (Intel) also do this in Xe and the i915 for userptrs
> (explicitly binding a user address via IOCTL) too and it seems to work
> quite well.

I always thought AMD's implementation of this stuff was bad..

> > > > > This is not what I'm talking about. The GPU VMA is bound to a specific
> > > > > MM VA, it should not be created on demand.
> > > >
> > > > Today we have two places where we create gpu vma: 1) create gpu vma
> > > > during a vm_bind ioctl 2) create gpu vma during a page fault of the
> > > > system allocator range (this will be in v2 of this series).
> > > 
> > > Don't do 2.
> 
> You have to create something, actually 2 things, on a GPU page fault.
> Something to track the page table state and something to track VRAM
> memory allocation. Both AMD and Nvidia's open source driver do this.

VRAM memory allocation should be tracked by the mm side, under the
covers of hmm_range_fault (or by migration prior to invoking
hmm_range_fault).

VRAM memory allocation or management has nothing to do with SVA.

From there the only need is to copy hmm_range_fault results into GPU
PTEs. You definitely do not *need* some other data structure.
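To make that concrete, here is a rough sketch (kernel-style pseudocode, not a buildable unit) of the mirroring loop driven purely by hmm_range_fault(). It assumes a driver-owned mmu_interval_notifier already covers the faulting range; my_write_gpu_ptes() is a hypothetical driver helper, not a real API:

	/* Sketch only: mirror a faulting range straight from
	 * hmm_range_fault() results into GPU PTEs. No CPU VMA is
	 * inspected and no extra mirror structure is kept. */
	static int my_gpu_fault_mirror(struct mmu_interval_notifier *notifier,
				       unsigned long start, unsigned long end)
	{
		struct hmm_range range = {
			.notifier = notifier,
			.start = start,
			.end = end,
			.default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
		};
		unsigned long *pfns;
		int ret;

		pfns = kvcalloc((end - start) >> PAGE_SHIFT,
				sizeof(*pfns), GFP_KERNEL);
		if (!pfns)
			return -ENOMEM;
		range.hmm_pfns = pfns;

		do {
			range.notifier_seq = mmu_interval_read_begin(notifier);
			mmap_read_lock(notifier->mm);
			ret = hmm_range_fault(&range);
			mmap_read_unlock(notifier->mm);
			if (ret == -EBUSY)
				continue;	/* seq raced, retry */
			if (ret)
				break;
			/* Hypothetical helper: write PTEs under the driver's
			 * page table lock, revalidating notifier_seq. */
			ret = my_write_gpu_ptes(&range);
		} while (ret == -EAGAIN);

		kvfree(pfns);
		return ret;
	}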

> > > > The reason is, we still need some gpu corresponding structure to
> > > > match the cpu vm_area_struct.
> > > 
> > > Definitely not.
> > 
> > See explanation above.
> 
> Agree the GPU doesn't need to match vm_area_struct, but the allocation
> must be a subset of (or equal to) a vm_area_struct. Again, other
> drivers do this too.

No, absolutely not. There can be no linking of CPU vm_area_struct to
how a driver operates hmm_range_fault().

You probably need to do something like this for your migration logic,
but that is separate.

> > > You call hmm_range_fault() and it does everything for you. A driver
> > > should never touch CPU VMAs and must not be aware of them in any way.
> 
> struct vm_area_struct is an argument to the migrate_vma* functions [4], so
> yes drivers need to be aware of CPU VMAs.

That is something else. If you want to mess with migration during your
GPU fault path then fine that is some "migration module", but it
should have NOTHING to do with how hmm_range_fault() is invoked or how
the *SVA* flow operates.

You are mixing things up here, this thread is talking about
hmm_range_fault() and SVA.

migration is something that happens before doing the SVA mirroring
flows.
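The separation looks roughly like this (kernel-style pseudocode sketch, not a buildable unit): the CPU VMA appears only inside the migration step, via a lookup under the mmap lock, and the SVA mirroring flow never sees it. my_alloc_and_copy_vram() and the src/dst pfn array setup are hypothetical driver details:

	/* Sketch only: migrate a range to VRAM *before* the SVA
	 * mirroring flow runs hmm_range_fault(). The vm_area_struct
	 * is used here and nowhere else. */
	static int my_migrate_to_vram(struct mm_struct *mm,
				      unsigned long start, unsigned long end,
				      unsigned long *src, unsigned long *dst,
				      void *pgmap_owner)
	{
		struct migrate_vma migrate = {
			.start = start,
			.end = end,
			.src = src,
			.dst = dst,
			.flags = MIGRATE_VMA_SELECT_SYSTEM,
			.pgmap_owner = pgmap_owner,
		};
		int ret;

		mmap_read_lock(mm);
		migrate.vma = vma_lookup(mm, start);	/* only CPU VMA use */
		if (!migrate.vma || end > migrate.vma->vm_end) {
			mmap_read_unlock(mm);
			return -EINVAL;
		}

		ret = migrate_vma_setup(&migrate);
		if (!ret && migrate.cpages) {
			/* Hypothetical helper: allocate device-private
			 * pages, fill dst[], copy the data over. */
			my_alloc_and_copy_vram(&migrate);
			migrate_vma_pages(&migrate);
			migrate_vma_finalize(&migrate);
		}
		mmap_read_unlock(mm);
		return ret;
	}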

Jason