On Tue, Mar 06, 2018 at 03:26:33PM +0530, Manikanta Maddireddy wrote:
> 
> 
> On 05-Mar-18 3:11 PM, Lorenzo Pieralisi wrote:
> > On Sat, Mar 03, 2018 at 01:47:38PM +0530, Manikanta Maddireddy wrote:
> > 
> > [...]
> > 
> >>>> +static int tegra_pcie_pm_resume(struct device *dev)
> >>>> +{
> >>>> +	struct tegra_pcie *pcie = dev_get_drvdata(dev);
> >>>> +	int err;
> >>>> +
> >>>> +	err = tegra_pcie_power_on(pcie);
> >>>> +	if (err) {
> >>>> +		dev_err(dev, "tegra pcie power on fail: %d\n", err);
> >>>> +		return err;
> >>>> +	}
> >>>> +	err = tegra_pcie_enable_controller(pcie);
> >>>> +	if (err) {
> >>>> +		dev_err(dev, "tegra pcie controller enable fail: %d\n", err);
> >>>> +		goto poweroff;
> >>>> +	}
> >>>> +	tegra_pcie_setup_translations(pcie);
> >>>> +
> >>>> +	if (IS_ENABLED(CONFIG_PCI_MSI))
> >>>> +		tegra_pcie_enable_msi(pcie);
> >>>> +
> >>>> +	tegra_pcie_enable_ports(pcie);
> >>> 
> >>> Is it correct to report a successful resume if some of the ports fail
> >>> to come up (whether within runtime PM or system suspend)? I think
> >>> that, if any of the ports fails to come up, returning a failure is
> >>> more appropriate here, given that the host bridge is a single device
> >>> as far as the kernel is concerned.
> >>> 
> >>> Thanks,
> >>> Lorenzo
> >>> 
> >> 
> >> Hi Lorenzo,
> >> 
> >> We (Thierry, Vidya Sagar and myself) had a discussion on this topic
> >> and we chose not to fail probe/resume, because the host controller is
> >> initialized properly, so failing probe/resume is not the right thing
> >> to do.
> > 
> > What happens then to the endpoints downstream of a root port that
> > failed to resume, but which we nonetheless reported as successfully
> > resumed?
> > 
> > Thanks,
> > Lorenzo
> > 
> 
> Hi Lorenzo,
> 
> If an endpoint comes up in probe and enumeration is completed, and the
> same endpoint doesn't come up during resume, I observe PCIe errors,
> because the PCIe enumeration hierarchy is unchanged and the PCI
> subsystem tries to restore the PCI device.
> This observation is the same if the host driver returns an error.
> 
> To handle this case, the driver needs to remove the respective PCIe
> port's enumeration hierarchy if the link doesn't come up in resume. It
> will be tricky to identify the PCI bus structure from the host port ID
> in resume; maybe some bookkeeping is required here.
> 
> However, if the PCIe link comes up in probe, it should come up during
> resume as well; otherwise it is a bug. Do you see a need to change the
> PCIe enumeration hierarchy in resume based on link-up status?

I think that for the time being it is fine to merge the current set -
I asked just for clarification; I agree there is no easy answer to the
issue I raised.

It should be safe to merge the series in its current form - I will keep
this issue on my radar to see if there is something we can do to
improve it.

Thanks,
Lorenzo

> 
> Thanks,
> Manikanta
> 
> >> The PCIe link may fail to come up if there is no endpoint in the
> >> slot, or on an interoperability failure where an endpoint-specific
> >> quirk or workaround is required. In such cases the host controller
> >> driver shouldn't fail.
> >> 
> >> Also, a user can connect an endpoint in only one slot and expect it
> >> to work. Even though the host bridge is a single device, the driver
> >> should allow a standalone port to work.
> >> 
> >> Thanks,
> >> Manikanta
> >> 
> >>>> 
> >>>>  	return 0;
> >>>> +
> >>>> +poweroff:
> >>>> +	tegra_pcie_power_off(pcie);
> >>>> +
> >>>> +	return err;
> >>>>  }
> >>>> 
> >>>> +static const struct dev_pm_ops tegra_pcie_pm_ops = {
> >>>> +	SET_RUNTIME_PM_OPS(tegra_pcie_pm_suspend, tegra_pcie_pm_resume, NULL)
> >>>> +	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(tegra_pcie_pm_suspend,
> >>>> +				      tegra_pcie_pm_resume)
> >>>> +};
> >>>> +
> >>>>  static struct platform_driver tegra_pcie_driver = {
> >>>>  	.driver = {
> >>>>  		.name = "tegra-pcie",
> >>>>  		.of_match_table = tegra_pcie_of_match,
> >>>>  		.suppress_bind_attrs = true,
> >>>> +		.pm = &tegra_pcie_pm_ops,
> >>>>  	},
> >>>>  	.probe = tegra_pcie_probe,
> >>>>  	.remove = tegra_pcie_remove,
> >>>> -- 
> >>>> 2.1.4
> >>>> 
--
To unsubscribe from this list: send the line "unsubscribe linux-tegra" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
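The teardown Manikanta describes above (removing a port's enumeration
hierarchy when its link does not come back in resume) could be sketched
roughly as below. This is a hypothetical fragment, not part of the posted
series: the `pcie->root_bus` field and the `PCI_DEVFN(port->index + 1, 0)`
mapping from a Tegra port to its root port's devfn are illustrative
assumptions, not verified against the actual Tegra topology.

```c
/*
 * Hypothetical sketch only, not part of the posted series: tear down
 * the enumeration hierarchy below a root port whose link did not come
 * back up during resume. Assumes the driver caches the root bus in a
 * field named pcie->root_bus at probe time, and that port->index + 1
 * is the root port's device number on that bus -- both assumptions.
 */
static void tegra_pcie_remove_dead_port(struct tegra_pcie *pcie,
					struct tegra_pcie_port *port)
{
	struct pci_dev *root_port;

	/* Look up the root port's pci_dev on the cached root bus. */
	root_port = pci_get_slot(pcie->root_bus,
				 PCI_DEVFN(port->index + 1, 0));
	if (!root_port)
		return;

	/* Remove the root port and everything enumerated below it. */
	pci_lock_rescan_remove();
	pci_stop_and_remove_bus_device(root_port);
	pci_unlock_rescan_remove();

	pci_dev_put(root_port);
}
```

pci_stop_and_remove_bus_device() is the same helper the pciehp hotplug
driver uses when a slot goes away, so the bookkeeping Manikanta mentions
would reduce to remembering the root bus and each port's devfn at probe.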