Re: [RFC 1/1] drm/i915/dgfx: Avoid parent bridge rpm on mmap mappings

> -----Original Message-----
> From: Vivi, Rodrigo <rodrigo.vivi@xxxxxxxxx>
> Sent: Tuesday, August 9, 2022 8:36 PM
> To: Gupta, Anshuman <anshuman.gupta@xxxxxxxxx>
> Cc: intel-gfx@xxxxxxxxxxxxxxxxxxxxx; daniel@xxxxxxxx; Wilson, Chris P
> <chris.p.wilson@xxxxxxxxx>; dri-devel@xxxxxxxxxxxxxxxxxxxxx
> Subject: Re: [RFC 1/1] drm/i915/dgfx: Avoid parent bridge rpm on mmap mappings
> 
> On Mon, Aug 08, 2022 at 04:05:55PM +0530, Anshuman Gupta wrote:
> > As per PCIe Spec Section 5.3:
> > When a Type 1 Function associated with a Switch/Root Port (a “virtual
> > bridge”) is in a non-D0 power state, it will emulate the behavior of a
> > conventional PCI bridge in its handling of Memory, I/O, and
> > Configuration Requests and Completions. All Memory and I/O requests
> > flowing Downstream are terminated as Unsupported Requests.
> >
> > Due to the above limitation, when the upstream bridge (referred to as
> > VSP) of an Intel DGFX card's graphics PCI function enters D3, all mmap
> > memory mappings associated with lmem objects read 0xff. Therefore,
> > avoid VSP runtime suspend while such mappings exist.
> >
> > This makes sure that the user can read valid data from lmem while the
> > DGFX card's graphics PCI function is in the D3 state.
> >
> > Signed-off-by: Anshuman Gupta <anshuman.gupta@xxxxxxxxx>
> > ---
> >  drivers/gpu/drm/i915/gem/i915_gem_mman.c | 11 ++++++++
> >  drivers/gpu/drm/i915/gem/i915_gem_ttm.c  |  8 ++++++
> >  drivers/gpu/drm/i915/intel_runtime_pm.c  | 35 ++++++++++++++++++++++++
> >  drivers/gpu/drm/i915/intel_runtime_pm.h  |  2 ++
> >  4 files changed, 56 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
> > index 0c5c43852e24..968bed5b56d3 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
> > @@ -845,6 +845,10 @@ static void vm_open(struct vm_area_struct *vma)
> >  	struct drm_i915_gem_object *obj = mmo->obj;
> >
> >  	GEM_BUG_ON(!obj);
> > +
> > +	if (i915_gem_object_is_lmem(obj))
> > +		intel_runtime_pm_get_vsp(to_i915(obj->base.dev));
> > +
> >  	i915_gem_object_get(obj);
> >  }
> >
> > @@ -854,6 +858,10 @@ static void vm_close(struct vm_area_struct *vma)
> >  	struct drm_i915_gem_object *obj = mmo->obj;
> >
> >  	GEM_BUG_ON(!obj);
> > +
> > +	if (i915_gem_object_is_lmem(obj))
> > +		intel_runtime_pm_put_vsp(to_i915(obj->base.dev));
> > +
> >  	i915_gem_object_put(obj);
> >  }
> >
> > @@ -972,6 +980,9 @@ int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma)
> >  		return PTR_ERR(anon);
> >  	}
> >
> > +	if (i915_gem_object_is_lmem(obj))
> > +		intel_runtime_pm_get_vsp(to_i915(obj->base.dev));
> > +
> >  	vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_IO;
> >
> >  	/*
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> > index 5a5cf332d8a5..bcacd95fdbc1 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> > @@ -1101,6 +1101,10 @@ static void ttm_vm_open(struct vm_area_struct *vma)
> >  		i915_ttm_to_gem(vma->vm_private_data);
> >
> >  	GEM_BUG_ON(!obj);
> > +
> > +	if (i915_gem_object_is_lmem(obj))
> > +		intel_runtime_pm_get_vsp(to_i915(obj->base.dev));
> > +
> >  	i915_gem_object_get(obj);
> >  }
> >
> > @@ -1110,6 +1114,10 @@ static void ttm_vm_close(struct vm_area_struct *vma)
> >  		i915_ttm_to_gem(vma->vm_private_data);
> >
> >  	GEM_BUG_ON(!obj);
> > +
> > +	if (i915_gem_object_is_lmem(obj))
> > +		intel_runtime_pm_put_vsp(to_i915(obj->base.dev));
> > +
> >  	i915_gem_object_put(obj);
> >  }
> 
> we need to ensure the runtime pm get / put at dma buf attach & detach as well,
> no?
AFAIU dma-buf import migrates an object from lmem to smem, so effectively the memory mapping
access will be limited to smem. So there should not be any issue here, unless I am missing something.
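
For reference, the import path I'm referring to looks roughly like the below (a trimmed
sketch of gem/i915_gem_dmabuf.c as of this series; error paths and the migration wait are
elided), where the exported object is moved to smem before any attachment can proceed:

static int i915_gem_dmabuf_attach(struct dma_buf *dmabuf,
                                  struct dma_buf_attachment *attach)
{
        struct drm_i915_gem_object *obj = dma_buf_to_obj(dmabuf);
        struct i915_gem_ww_ctx ww;
        int err;

        /* Objects that cannot live in smem cannot be exported at all. */
        if (!i915_gem_object_can_migrate(obj, INTEL_REGION_SMEM))
                return -EOPNOTSUPP;

        for_i915_gem_ww(&ww, err, true) {
                err = i915_gem_object_lock(obj, &ww);
                if (err)
                        continue;

                /* Move the backing store out of lmem before sharing... */
                err = i915_gem_object_migrate(obj, &ww, INTEL_REGION_SMEM);
                if (err)
                        continue;

                /*
                 * ...and pin it there, so importer mappings only ever
                 * touch smem and never depend on the VSP staying in D0.
                 */
                err = i915_gem_object_pin_pages(obj);
        }

        return err;
}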

Thanks,
Anshuman Gupta.

> 
> >
> > diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
> > index 6ed5786bcd29..a5557918674f 100644
> > --- a/drivers/gpu/drm/i915/intel_runtime_pm.c
> > +++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
> > @@ -646,3 +646,38 @@ void intel_runtime_pm_init_early(struct intel_runtime_pm *rpm)
> >
> >  	init_intel_runtime_pm_wakeref(rpm);
> >  }
> > +
> > +void intel_runtime_pm_get_vsp(struct drm_i915_private *i915)
> > +{
> > +	struct pci_dev *pdev = to_pci_dev(i915->drm.dev), *vsp;
> > +
> > +	if (!IS_DGFX(i915))
> > +		return;
> > +
> > +	vsp = pci_upstream_bridge(pdev);
> > +	if (!vsp) {
> > +		drm_err(&i915->drm, "Failed to get the PCI upstream bridge\n");
> > +		return;
> > +	}
> > +
> > +	pci_dbg(vsp, "get runtime usage count\n");
> 
> we should always prefer the drm_dbg in our subsystem
This print will be useful in case of any wakeref leaks from users around munmap.
The device BDF that pci_dbg prints will be useful in such cases.
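
To illustrate the point (the BDFs below are made up): pci_dbg() is dev_dbg() on the
bridge's own struct device, so the dmesg line names the exact bridge that holds the
reference:

        /*
         * With dynamic debug enabled this logs something like
         *
         *   pcieport 0000:00:01.0: get runtime usage count
         *
         * i.e. the VSP's own BDF. drm_dbg() would instead tag the
         * message with the GPU endpoint (e.g. "i915 0000:03:00.0:"),
         * which does not tell us which upstream bridge is pinned.
         */
        pci_dbg(vsp, "get runtime usage count\n");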
> 
> > +	pm_runtime_get_sync(&vsp->dev);
> 
> why? I believe that grabbing our own ref would be enough to block the
> upstream chain. I don't understand why this is such a special case that we don't
> see any other driver in the Linux tree having to do such a thing. What am I
> missing?
IMHO grabbing our own device wakeref would keep it in D0, whereas with only the parent bridge
pinned, the gfx endpoint function can still transition to D3 (D3hot) to save some power.

If we are sure there are no power savings with only the gfx endpoint function in D3hot, then
it would indeed be correct to take the wakeref of our own device instead.
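
A minimal sketch of the trade-off, assuming the standard runtime-PM parent/child rule
(the PM core resumes a parent before its child and will not runtime-suspend a parent
that still has an RPM-active child):

        /*
         * Option in this patch: pin only the upstream bridge. The VSP
         * stays in D0, so downstream memory requests are not terminated
         * as Unsupported Requests, while the gfx endpoint itself remains
         * free to runtime-suspend to D3hot.
         */
        pm_runtime_get_sync(&vsp->dev);

        /*
         * Alternative suggested above: pin the GPU endpoint instead. The
         * PM core then keeps the whole upstream chain, including the VSP,
         * active, but the endpoint is also held in D0 for the lifetime of
         * the mapping, so any D3hot saving is lost.
         */
        pm_runtime_get_sync(&pdev->dev);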

Thanks,
Anshuman Gupta.   
> 
> > +}
> > +
> > +void intel_runtime_pm_put_vsp(struct drm_i915_private *i915)
> > +{
> > +	struct pci_dev *pdev = to_pci_dev(i915->drm.dev), *vsp;
> > +
> > +	if (!IS_DGFX(i915))
> > +		return;
> > +
> > +	vsp = pci_upstream_bridge(pdev);
> > +	if (!vsp) {
> > +		drm_err(&i915->drm, "Failed to get the PCI upstream bridge\n");
> > +		return;
> > +	}
> > +
> > +	pci_dbg(vsp, "put runtime usage count\n");
> > +	pm_runtime_mark_last_busy(&vsp->dev);
> > +	pm_runtime_put(&vsp->dev);
> > +}
> > diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.h b/drivers/gpu/drm/i915/intel_runtime_pm.h
> > index d9160e3ff4af..b86843bf4f5d 100644
> > --- a/drivers/gpu/drm/i915/intel_runtime_pm.h
> > +++ b/drivers/gpu/drm/i915/intel_runtime_pm.h
> > @@ -173,6 +173,8 @@ void intel_runtime_pm_init_early(struct intel_runtime_pm *rpm);
> >  void intel_runtime_pm_enable(struct intel_runtime_pm *rpm);
> >  void intel_runtime_pm_disable(struct intel_runtime_pm *rpm);
> >  void intel_runtime_pm_driver_release(struct intel_runtime_pm *rpm);
> > +void intel_runtime_pm_get_vsp(struct drm_i915_private *i915);
> > +void intel_runtime_pm_put_vsp(struct drm_i915_private *i915);
> >
> >  intel_wakeref_t intel_runtime_pm_get(struct intel_runtime_pm *rpm);
> >  intel_wakeref_t intel_runtime_pm_get_if_in_use(struct intel_runtime_pm *rpm);
> > --
> > 2.26.2
> >



