Re: [PATCH] PCI: vmd: Allow VMD PM to use PCI core PM code

On Wed, 2020-08-05 at 10:09 -0500, Bjorn Helgaas wrote:
> [+cc Vaibhav, Rafael for suspend/resume question]
> 
> On Fri, Jul 31, 2020 at 01:15:44PM -0400, Jon Derrick wrote:
> > The pci_save_state call in vmd_suspend can be performed by
> > pci_pm_suspend_irq. This allows the call to pci_prepare_to_sleep into
> > ASPM flow.
> 
> Add "()" after function names so they don't look like English words.
> 
> What is this "ASPM flow"? 
(Forgive my misunderstanding)

>  The only ASPM-related code should be
> configuration (enable/disable ASPM) (which happens at
> enumeration-time, not suspend/resume time) and save/restore if we turn
> the device off and we have to reconfigure it when we turn it on again.
So on the suspend path we gain a call to pci_prepare_to_sleep() by not
pre-saving the state.
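
If I'm reading drivers/pci/pci-driver.c right, the relevant bit of
pci_pm_suspend_noirq() is roughly this (paraphrased, guard conditions
simplified; the exact checks vary by kernel version):

    if (!pci_dev->state_saved) {
        pci_save_state(pci_dev);
        /* only reached when the driver didn't pre-save state */
        if (pci_power_manageable(pci_dev))
            pci_prepare_to_sleep(pci_dev);
    }

so with vmd_suspend() calling pci_save_state() itself, the core never
got as far as pci_prepare_to_sleep().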

The big consideration here is that we are a non-ACPI device/domain on
an ACPI-PM-manageable system, so I'm not certain whether we're missing
some platform-critical pieces in the pci_platform_power_* functions.
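
My (possibly wrong) reading of the D-state transition in
drivers/pci/pci.c is roughly this (heavily simplified sketch, not
verbatim kernel code):

    error = platform_pci_set_power_state(dev, state);
    if (!error)
        pci_update_current_state(dev, state);
    /*
     * A VMD-domain device has no ACPI companion, so the platform
     * call fails and all we get is the native PMCSR write below.
     */
    pci_raw_set_power_state(dev, state);

i.e. the platform hook is effectively a no-op for us, and I don't know
whether native config-space PM alone covers everything the platform
expects.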

> > The pci_restore_state call in vmd_resume was restoring state after
> > pci_pm_resume->pci_restore_standard_config had already restored state.
> > It's also been suspected that the config state should be restored before
> > re-requesting IRQs.
> > 
> > Remove the pci_{save,restore}_state calls in vmd_{suspend,resume} in
> > order to allow proper flow through PCI core power management ASPM code.
> > 
> > Cc: Kai-Heng Feng <kai.heng.feng@xxxxxxxxxxxxx>
> > Cc: You-Sheng Yang <vicamo.yang@xxxxxxxxxxxxx>
> > Signed-off-by: Jon Derrick <jonathan.derrick@xxxxxxxxx>
> > ---
> >  drivers/pci/controller/vmd.c | 2 --
> >  1 file changed, 2 deletions(-)
> > 
> > diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> > index 76d8acbee7d5..15c1d85d8780 100644
> > --- a/drivers/pci/controller/vmd.c
> > +++ b/drivers/pci/controller/vmd.c
> > @@ -719,7 +719,6 @@ static int vmd_suspend(struct device *dev)
> >  	for (i = 0; i < vmd->msix_count; i++)
> >  		devm_free_irq(dev, pci_irq_vector(pdev, i), &vmd->irqs[i]);
> >  
> > -	pci_save_state(pdev);
> 
> The VMD driver uses generic PM, not legacy PCI PM, so I think removing
> the save/restore state from your suspend/resume functions is the right
> thing to do.  You should only need to do VMD-specific things there.
> 
> I'm not even sure you need to free/request the IRQs in your
> suspend/resume.  Maybe Rafael or Vaibhav know.
The history on that: with many VMD instances consuming many IRQ
vectors, CPU offlining on the suspend path failed when migrating IRQs
away from the CPUs being taken down:

commit e2b1820bd5d0962d6f271b0d47c3a0e38647df2f
Author: Scott Bauer <scott.bauer@xxxxxxxxx>
Date:   Fri Aug 11 14:54:32 2017 -0600

    PCI: vmd: Free up IRQs on suspend path
    
    Free up the IRQs we request on the suspend path and reallocate them on the
    resume path.
    
    Fixes this error:
    
      CPU 111 disable failed: CPU has 9 vectors assigned and there are only 0 available.
      Error taking CPU111 down: -34
      Non-boot CPUs are not disabled
      Enabling non-boot CPUs ...

> 
> I just think the justification in the commit log is wrong.
> 
> >  	return 0;
> >  }
> >  
> > @@ -737,7 +736,6 @@ static int vmd_resume(struct device *dev)
> >  			return err;
> >  	}
> >  
> > -	pci_restore_state(pdev);
> >  	return 0;
> >  }
> >  #endif
> > -- 
> > 2.18.1
> > 



