Re: [PATCH 06/23] drm/xe/svm: Introduce a helper to build sg table from hmm range

On Thu, May 02, 2024 at 07:25:50PM +0000, Zeng, Oak wrote:
> Hi Jason,
> 
> I tried to understand how you intend us to use hmm_range_fault... it seems you want us to call hmm_range_fault twice on each GPU page fault:
 
> 1.
> Call hmm_range_fault the first time, with the pfn of the fault address set with HMM_PFN_REQ_FAULT
> Other pfns in the PREFAULT_SIZE range will be set to 0
> hmm_range_fault returns:
> 	A pfn with the 0 flag or the HMM_PFN_VALID flag means a valid pfn
> 	A pfn with the HMM_PFN_ERROR flag means an invalid pfn
> 
> 2.
> Then call hmm_range_fault a second time
> Setting the hmm_range start/end to cover only the valid pfns
> With all valid pfns, set the REQ_FAULT flag

Why would you do this? The first already did the faults you needed and
returned all the easy pfns that don't require faulting.
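For reference, the single-call pattern described here can be sketched roughly as below: only the entry for the faulting address carries HMM_PFN_REQ_FAULT, while the rest of the window is snapshot-only, so already-populated pages come back valid without triggering new faults. This is a sketch against the in-kernel hmm_range_fault API; `gpu_fault_addr`, `window_start`/`window_end`, `NPFNS`, `mm`, and `notifier` stand in for driver state and are assumptions, not Xe code:

```c
/* Sketch: one hmm_range_fault call covering the whole prefetch window.
 * Only the faulting address demands population; every other pfn entry
 * is a snapshot request, so populated pages return HMM_PFN_VALID and
 * unpopulated ones return empty without faulting.
 */
unsigned long pfns[NPFNS] = {};
struct hmm_range range = {
	.notifier	= &notifier,	/* driver's mmu_interval_notifier */
	.start		= window_start,
	.end		= window_end,
	.hmm_pfns	= pfns,
	.default_flags	= 0,		/* snapshot-only by default */
};
int ret;

/* Demand population only where the GPU actually faulted. */
pfns[(gpu_fault_addr - window_start) >> PAGE_SHIFT] = HMM_PFN_REQ_FAULT;

range.notifier_seq = mmu_interval_read_begin(&notifier);
mmap_read_lock(mm);
ret = hmm_range_fault(&range);	/* -EBUSY means retry from read_begin */
mmap_read_unlock(mm);
```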

> Basically, use hmm_range_fault to figure out the valid address range
> in the first round; then really fault (e.g., trigger CPU faults to
> allocate system pages) in the second call to hmm_range_fault.

You don't fault on prefetch. Prefetch is about mirroring already
populated pages, it should not be causing new faults.

> Do I understand it correctly?

No
 
> This is strange to me. We should already know the valid address
> range before we call hmm_range_fault, because the migration code
> needs to look up the CPU VMA anyway. What is the point of the first
> hmm_range_fault?

I don't really understand why the GPU driver would drive migration off
of faulting. It doesn't make a lot of sense, especially if you are
prefetching CPU pages into the GPU and thus won't get faults for them.

If your plan is to leave the GPU page tables unpopulated and then
migrate on every fault to try to achieve some kind of locality then
you'd want to drive the hmm prefetch on the migration window (so you
don't populate unmigrated pages) and hope for the best.

However, the migration stuff should really not be in the driver
either. That should be core DRM logic to manage that. It is so
convoluted and full of policy that all the drivers should be working
in the same way. 

The GPU fault handler should indicate to some core DRM function that a
GPU memory access occurred and get back a prefetch window to pass into
hmm_range_fault. The driver will mirror what the core code tells it.
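The mirroring step then follows the standard pattern from Documentation/mm/hmm.rst: walk under an mmu_interval_notifier sequence number and retry if the range was invalidated before the device page table is updated. In the sketch below, `core_drm_get_prefetch_window()` is a hypothetical placeholder for the core function being proposed here (it does not exist today), and `driver->pt_lock` stands in for whatever lock serializes device page-table updates against invalidation:

```c
	struct drm_prefetch_window win;	/* hypothetical, see lead-in */

	/* Hypothetical core helper: maps the faulting address to a
	 * prefetch window using core-owned policy. */
	win = core_drm_get_prefetch_window(gpusvm, fault_addr);
	range.start = win.start;
	range.end = win.end;

again:
	range.notifier_seq = mmu_interval_read_begin(&notifier);
	mmap_read_lock(mm);
	ret = hmm_range_fault(&range);
	mmap_read_unlock(mm);
	if (ret == -EBUSY)
		goto again;	/* range was invalidated mid-walk */
	if (ret)
		return ret;

	mutex_lock(&driver->pt_lock);
	if (mmu_interval_read_retry(&notifier, range.notifier_seq)) {
		mutex_unlock(&driver->pt_lock);
		goto again;
	}
	/* Use range.hmm_pfns[] to program the GPU page tables. */
	mutex_unlock(&driver->pt_lock);
```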

Jason


