On Wed, Oct 30, 2019 at 11:14 PM Bjorn Helgaas <helgaas@xxxxxxxxxx> wrote:
>
> [+cc Heiner, Rajat]
>
> On Tue, Oct 29, 2019 at 05:31:18PM +0800, Dilip Kota wrote:
> > On 10/22/2019 8:59 PM, Bjorn Helgaas wrote:
> > > [+cc Rafael, linux-pm, beginning of discussion at
> > > https://lore.kernel.org/r/d8574605f8e70f41ce1e88ccfb56b63c8f85e4df.1571638827.git.eswara.kota@xxxxxxxxxxxxxxx]
> > >
> > > On Tue, Oct 22, 2019 at 05:27:38PM +0800, Dilip Kota wrote:
> > > > On 10/22/2019 1:18 AM, Bjorn Helgaas wrote:
> > > > > On Mon, Oct 21, 2019 at 02:38:50PM +0100, Andrew Murray wrote:
> > > > > > On Mon, Oct 21, 2019 at 02:39:20PM +0800, Dilip Kota wrote:
> > > > > > > The PCIe RC driver on Intel Gateway SoCs has a requirement
> > > > > > > to change link width and speed on the fly.
> > > > > Please add more details about why this is needed.  Since you're
> > > > > adding sysfs files, it sounds like it's not actually the *driver*
> > > > > that needs this; it's something in userspace?
> > > > We have use cases to change the link speed and width on the fly:
> > > > one is EMI testing, the other is power saving.  Some battery-backed
> > > > applications have to switch the PCIe link from a higher GEN to GEN1
> > > > and the width to x1, for example when the external power supply is
> > > > disconnected or broken.  Once the external power supply is connected
> > > > again, they switch the PCIe link back to the higher GEN and width.
> > > That sounds plausible, but of course nothing there is specific to the
> > > Intel Gateway, so we should implement this generically so it would
> > > work on all hardware.
> > Agree.
> >
> > > I'm not sure what the interface should look like -- should it be a
> > > low-level interface as you propose where userspace would have to
> > > identify each link of interest, or is there some system-wide
> > > power/performance knob that could tune all links?  Cc'd Rafael and
> > > linux-pm in case they have ideas.
> >
> > To my knowledge sysfs is the appropriate way to go.
> > If there are any other, better knobs, that would be helpful to know.
>
> I agree sysfs is the right place for it; my question was whether we
> should have files like:
>
>   /sys/.../0000:00:1f.3/pcie_speed
>   /sys/.../0000:00:1f.3/pcie_width
>
> as I think this patch would add (BTW, please include sample paths like
> the above in the commit log), or whether there should be a more global
> thing that would affect all the links in the system.
>
> I think the low-level files like you propose would be better because
> one might want to tune link performance differently for different
> types of devices and workloads.
>
> We also have to decide if these files should be associated with the
> device at the upstream or downstream end of the link.  For ASPM, the
> current proposal [1] has the files at the downstream end on the theory
> that the GPU, NIC, NVMe device, etc. is the user-recognizable one.
> Also, neither ASPM nor link speed/width makes any sense unless there
> *is* a device at the downstream end, so putting the files there
> automatically makes them visible only when they're useful.
>
> Rafael had some concerns about the proposed ASPM interface [2], but I
> don't know what they are yet.

I was talking about the existing ASPM interface in sysfs.  The new one
I still have to review, but I'm wondering about the people who used the
old one: will it still be supported going forward?

> For ASPM we added a "link_pm" directory, and maybe that's too
> specific.  Maybe it should be a generic "link_mgt" or even "pcie"
> directory that could contain both the ASPM and width/speed files.
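
Just to make the directory question concrete: a named attribute_group
registered on the device shows up as a subdirectory, so a generic
"pcie" (or "link_mgt") group could hold the ASPM knobs and the
speed/width files side by side.  Rough sketch only, not taken from
either posted patch; the attribute and group names below are made up
for illustration:

  #include <linux/device.h>
  #include <linux/pci.h>
  #include <linux/sysfs.h>

  /* Illustrative only: report the negotiated link speed as a GEN number. */
  static ssize_t link_speed_show(struct device *dev,
                                 struct device_attribute *attr, char *buf)
  {
          struct pci_dev *pdev = to_pci_dev(dev);
          u16 lnksta = 0;

          pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnksta);
          return sprintf(buf, "GEN%d\n", lnksta & PCI_EXP_LNKSTA_CLS);
  }
  static DEVICE_ATTR_RO(link_speed);

  static struct attribute *pcie_link_attrs[] = {
          &dev_attr_link_speed.attr,
          /* link_width, ASPM controls, etc. would sit here as well */
          NULL,
  };

  static const struct attribute_group pcie_link_group = {
          .name  = "pcie",        /* appears as .../<device>/pcie/ */
          .attrs = pcie_link_attrs,
  };

Registering a group like that for the downstream device (e.g. with
sysfs_create_group(), or by adding it to the pci_dev attribute groups)
would give all of these knobs one predictable home instead of a
per-feature pile of top-level files.
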
>
> There's also a change coming to put AER stats in something like this:
>
>   /sys/.../0000:00:1f.3/aer_stats/correctable_rx_err
>   /sys/.../0000:00:1f.3/aer_stats/correctable_timeout
>   /sys/.../0000:00:1f.3/aer_stats/fatal_TLP
>   ...
>
> It would certainly be good to have some organizational scheme or we'll
> end up with a real hodge-podge.
>
> [1] https://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci.git/commit/?h=pci/aspm&id=ad46fe1c733656611788e2cd59793e891ed7ded7
> [2] https://lore.kernel.org/r/CAJZ5v0jdxR4roEUC_Hs3puCzGY4ThdLsi_XcxfBUUxqruP4z7A@xxxxxxxxxxxxxx
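
FWIW, the read-only side of this is already visible today through the
current_link_speed/current_link_width attributes, so any new writable
files would presumably end up next to those (or under whatever common
directory we settle on).  A minimal, untested userspace sketch that
just dumps them; the device address is the example one from the paths
above, and the semantics of the proposed writable pcie_speed/pcie_width
files are deliberately not assumed here:

  /* Hypothetical example: dump the existing read-only link attributes. */
  #include <stdio.h>

  static void dump_attr(const char *bdf, const char *attr)
  {
          char path[256], value[64];
          FILE *f;

          snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/%s", bdf, attr);
          f = fopen(path, "r");
          if (!f) {
                  perror(path);
                  return;
          }
          if (fgets(value, sizeof(value), f))
                  printf("%s: %s", attr, value);  /* value includes a newline */
          fclose(f);
  }

  int main(void)
  {
          /* example device from the paths above; use a real PCIe endpoint */
          const char *bdf = "0000:00:1f.3";

          dump_attr(bdf, "current_link_speed");   /* e.g. "8 GT/s" */
          dump_attr(bdf, "current_link_width");   /* e.g. "4" */
          return 0;
  }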