On Wed, Feb 26, 2025 at 6:28 PM Naveen Kumar P <naveenkumar.parna@xxxxxxxxx> wrote:
>
> On Wed, Feb 26, 2025 at 2:08 AM Bjorn Helgaas <helgaas@xxxxxxxxxx> wrote:
> >
> > On Tue, Feb 25, 2025 at 06:46:02PM +0530, Naveen Kumar P wrote:
> > > On Tue, Feb 25, 2025 at 1:24 AM Bjorn Helgaas <helgaas@xxxxxxxxxx> wrote:
> > > > On Tue, Feb 25, 2025 at 12:29:00AM +0530, Naveen Kumar P wrote:
> > > > > On Mon, Feb 24, 2025 at 11:03 PM Bjorn Helgaas <helgaas@xxxxxxxxxx> wrote:
> > > > > > On Mon, Feb 24, 2025 at 05:45:35PM +0530, Naveen Kumar P wrote:
> > > > > > > On Wed, Feb 19, 2025 at 10:36 PM Bjorn Helgaas <helgaas@xxxxxxxxxx> wrote:
> > > > > > > > On Wed, Feb 19, 2025 at 05:52:47PM +0530, Naveen Kumar P wrote:
> > > > > > > > > Hi all,
> > > > > > > > >
> > > > > > > > > I am writing to seek assistance with an issue we are experiencing with
> > > > > > > > > a PCIe device (PLDA Device 5555) connected through PCI Express Root
> > > > > > > > > Port 1 to the host bridge.
> > > > > > > > >
> > > > > > > > > We have observed that after booting the system, the Base Address
> > > > > > > > > Register (BAR0) memory of this device gets reset to 0x0 after
> > > > > > > > > approximately one hour or more (the timing is inconsistent). This was
> > > > > > > > > verified using the lspci output and the setpci -s 01:00.0
> > > > > > > > > BASE_ADDRESS_0 command.
> > > > > > > > > ...
> > > > > I booted with the pcie_aspm=off kernel parameter, which means that
> > > > > PCIe Active State Power Management (ASPM) is disabled. Given this
> > > > > context, should I consider removing this setting to see if it affects
> > > > > the occurrence of the Bus Check notifications and the BAR0 reset
> > > > > issue?
> > > >
> > > > Doesn't seem likely to be related. Once configured, ASPM operates
> > > > without any software intervention. But note that "pcie_aspm=off"
> > > > means the kernel doesn't touch ASPM configuration at all, and any
> > > > configuration done by firmware remains in effect.
> > > >
> > > > You can tell whether ASPM has been enabled by firmware with "sudo
> > > > lspci -vv" before the problem occurs.
> > > >
> > > > > > > During the ACPI_NOTIFY_BUS_CHECK event, the lspci output initially
> > > > > > > showed all FF's, and then the next run of the same command showed
> > > > > > > BASE_ADDRESS_0 reset to zero:
> > > > > > > $ sudo lspci -xxx -s 01:00.0 | grep "10:"
> > > > > > > 10: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
> > > > > >
> > > > > > Looks like the device isn't responding at all here. Could happen if
> > > > > > the device is reset or powered down.
> > > > > >
> > > > > From the kernel driver or user space tools, is it possible to
> > > > > determine whether the device has been reset or powered down? Are
> > > > > there any power management settings or configurations that could be
> > > > > causing the device to reset or power down unexpectedly?
> > > >
> > > > Not really. By "powered down", I meant D3cold, where the main power
> > > > is removed. Config space is readable in all other power states.
> > > >
> > > > > > What is this device? What driver is bound to it? I don't see
> > > > > > anything in dmesg that identifies a driver.
> > > > > >
> > > > > The PCIe device in question is a Xilinx FPGA endpoint, which is
> > > > > flashed with RTL code to expose several host interfaces to the system
> > > > > via the PCIe link.
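
One way to narrow this down from user space (a minimal, untested sketch, not a
polished tool): poll the endpoint's config space through its sysfs "config"
file and timestamp when the Vendor/Device ID dword (offset 0x00) starts reading
0xffffffff (device not responding at all) versus when the ID is still valid but
BAR0 (offset 0x10) reads back 0 (only the BAR programming lost). The path below
assumes PCI domain 0000 in front of the 01:00.0 address quoted above; adjust it
if your domain differs.

/* barwatch.c - untested sketch */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/types.h>
#include <time.h>
#include <unistd.h>

/* Assumed path: domain 0000 plus the 01:00.0 address from the lspci output */
#define CFG_PATH "/sys/bus/pci/devices/0000:01:00.0/config"

static uint32_t cfg_read32(int fd, off_t off)
{
	uint32_t val;

	/* pread() on the sysfs "config" file issues a real config-space read;
	 * treat a failed read like a non-responding device */
	if (pread(fd, &val, sizeof(val), off) != (ssize_t)sizeof(val))
		val = 0xffffffff;
	return val;
}

int main(void)
{
	uint32_t last_id = 0, last_bar0 = 0;
	int fd = open(CFG_PATH, O_RDONLY);

	if (fd < 0) {
		perror(CFG_PATH);
		return 1;
	}
	for (;;) {
		uint32_t id = cfg_read32(fd, 0x00);	/* Vendor/Device ID */
		uint32_t bar0 = cfg_read32(fd, 0x10);	/* BAR0 */

		/* Log only transitions, with a timestamp for correlation */
		if (id != last_id || bar0 != last_bar0) {
			printf("[%ld] ID=0x%08x BAR0=0x%08x\n",
			       (long)time(NULL), id, bar0);
			fflush(stdout);
			last_id = id;
			last_bar0 = bar0;
		}
		sleep(1);
	}
}

Correlating those timestamps with the ACPI GPE/Bus Check messages in dmesg
should show whether the device disappears from config space first and the BAR
is only found cleared afterwards, or the other way around.
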
> > > > >
> > > > > We have an out-of-tree driver for this device, but to eliminate the
> > > > > driver's role in this issue, I renamed the driver to prevent it from
> > > > > loading automatically after rebooting the machine. Despite not using
> > > > > the driver, the issue still occurred.
> > > >
> > > > Oh, right, I forgot that you mentioned this before.
> > > >
> > > > > > You're seeing the problem on v5.4 (Nov 2019), which is much newer than
> > > > > > v4.4 (Jan 2016). But v5.4 is still really too old to spend a lot of
> > > > > > time on unless the problem still happens on a current kernel.
> > > >
> > > > This part is important. We don't want to spend a lot of time
> > > > debugging an issue that may have already been fixed upstream.
> > >
> > > Sure, I started building the 6.13 kernel and will post more
> > > information if I notice the issue on the 6.13 kernel.
>
> I have downloaded the 6.13 kernel source and added additional debug
> logs in hotplug_event(), then built the kernel. After that, I rebooted
> with the new kernel using the following parameters:
> BOOT_IMAGE=/vmlinuz-6.13.0+ root=/dev/mapper/vg00-rootvol ro quiet
> libata.force=noncq pci=nomsi pcie_aspm=off pcie_ports=on "dyndbg=file
> drivers/pci/* +p; file drivers/acpi/* +p"
>
> After some time post-boot, I ran the following commands without
> initially checking the dmesg log:
> $ sudo lspci -xxx -s 01:00.0 | grep "10:"
> 10: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
>
> $ sudo lspci -xxx -s 01:00.0 | grep "10:"
> 10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>
> The first run of lspci showed all FF's, and the next run showed
> BASE_ADDRESS_0 reset to zero. After observing this, I checked the
> dmesg log and found the following information:
>
> [ 2434.267810] ACPI: GPE event 0x01
> [ 2434.374249] ACPI: \_SB_.PCI0.RP01: ACPI: ACPI_NOTIFY_BUS_CHECK event
> [ 2434.374375] ACPI: \_SB_.PCI0.RP01: ACPI: OSL: Scheduling hotplug
> event 0 for deferred handling
> [ 2434.376001] ACPI: \_SB_.PCI0.RP01: acpiphp_glue: Bridge acquired in
> hotplug_event()
> [ 2434.376125] ACPI: \_SB_.PCI0.RP01: acpiphp_glue: Bus check in hotplug_event()
> [ 2434.376268] ACPI: \_SB_.PCI0.RP01: acpiphp_glue: Checking bridge in
> hotplug_event()
> [ 2434.376615] ACPI: \_SB_.PCI0.RP01.PXSX: ACPI: utils: Evaluate
> [_STA]: AE_NOT_FOUND
> [ 2434.377652] ACPI: \_SB_.PCI0.RP01.PXSX._ADR: ACPI: No context!
> [ 2434.379715] ACPI: \_SB_.PCI0.RP01.PXSX._PRW: ACPI: No context!
> [ 2434.383699] ACPI: \_SB_.PCI0.RP01.PXSX: ACPI: utils: Evaluate
> [_STA]: AE_NOT_FOUND
> [ 2434.383723] ACPI: Device [PXSX] status [0000000f]
> [ 2434.386059] ACPI: \_SB_.PCI0.RP01.D015._ADR: ACPI: No context!
> [ 2434.388332] ACPI: \_SB_.PCI0.RP01.D015: ACPI: utils: Evaluate
> [_STA]: AE_NOT_FOUND
> [ 2434.388354] ACPI: Device [D015] status [0000000f]
> [ 2434.388857] ACPI: \_SB_.PCI0.RP01: acpiphp_glue: Releasing bridge
> in hotplug_event()
> [ 2434.592773] ACPI: \_SB_.PCI0.SBRG.ADP1: ACPI: utils: Return value [1]
> [ 2450.241979] ACPI: \_SB_.PCI0.SBRG.ADP1: ACPI: utils: Return value [1]
> [ 2451.897846] ACPI: \_SB_.PCI0.SBRG.ADP1: ACPI: utils: Return value [1]
>
> Prior to this and afterwards, the dmesg log was flooded with "ACPI:
> _SB_.PCI0.SBRG.ADP1: ACPI: utils: Return value [1]" statements.
>
> Complete dmesg log and the patch (to get additional debug information)
> are attached to this email. I would greatly appreciate any guidance or
> next steps you could provide to help debug this issue.
>
> Any further guidance on these observations?
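
When 01:00.0 reads back all FFs, the state of the root port just above it is
also interesting: if the link went down and later retrained, the endpoint could
come back with its config registers reset (BARs cleared), which would match
what lspci shows. Below is an untested sketch that walks the conventional
capability list in a device's sysfs "config" file, finds the PCI Express
capability, and prints Link Control/Link Status (CommClk bit, negotiated speed
and width). The root port's BDF is not in the logs above, so the path is taken
as an argument (find the bridge above 01:00.0 with "lspci -t"); run it as root
so the full config space is readable.

/* lnkinfo.c - untested sketch; pass e.g. /sys/bus/pci/devices/<BDF>/config */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

#define PCI_CAP_PTR    0x34	/* capability list pointer in the header */
#define PCI_CAP_ID_EXP 0x10	/* PCI Express capability ID */
#define PCI_EXP_LNKCTL 0x10	/* Link Control, offset within the capability */
#define PCI_EXP_LNKSTA 0x12	/* Link Status */

static int cfg_read(int fd, off_t off, void *buf, size_t len)
{
	/* pread() on a sysfs "config" file performs a real config-space read */
	return pread(fd, buf, len, off) == (ssize_t)len ? 0 : -1;
}

/* Walk the conventional capability list looking for the PCIe capability. */
static int find_pcie_cap(int fd)
{
	uint8_t pos = 0, id = 0;
	int ttl = 48;	/* guard against loops/garbage (e.g. a device reading all FFs) */

	if (cfg_read(fd, PCI_CAP_PTR, &pos, 1))
		return -1;
	while (pos && pos != 0xff && ttl--) {
		if (cfg_read(fd, pos, &id, 1))
			return -1;
		if (id == PCI_CAP_ID_EXP)
			return pos;
		if (cfg_read(fd, pos + 1, &pos, 1))
			return -1;
	}
	return -1;
}

int main(int argc, char **argv)
{
	uint16_t lnkctl = 0, lnksta = 0;
	int fd, cap;

	if (argc != 2) {
		fprintf(stderr, "usage: %s /sys/bus/pci/devices/<BDF>/config\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror(argv[1]);
		return 1;
	}
	cap = find_pcie_cap(fd);
	if (cap < 0) {
		fprintf(stderr, "no PCIe capability found (device not responding?)\n");
		return 1;
	}
	cfg_read(fd, cap + PCI_EXP_LNKCTL, &lnkctl, 2);
	cfg_read(fd, cap + PCI_EXP_LNKSTA, &lnksta, 2);
	printf("LnkCtl=0x%04x (CommClk%c)  LnkSta=0x%04x (speed gen%u, width x%u)\n",
	       lnkctl, (lnkctl & 0x0040) ? '+' : '-',
	       lnksta, lnksta & 0x000f, (lnksta >> 4) & 0x3f);
	return 0;
}

Comparing the root port's output while the endpoint reads as FFs, and again
once BAR0 shows zero, would tell whether the link itself bounced or the
endpoint merely stopped responding.
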
>
> Additionally, I noticed that the initial bootup logs with the
> "0.000000" timestamp are missing in the dmesg log with this new
> kernel. I'm unsure what might be causing this issue.
> > >
> > > Regarding the CommClk- (Common Clock Configuration) bit, it indicates
> > > whether the common clock configuration is enabled or disabled. When it
> > > is set to CommClk-, it means that the common clock configuration is
> > > disabled.
> > >
> > > LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk-
> > > ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
> > >
> > > For my device, I noticed that the common clock configuration is
> > > disabled. Could this be causing the BAR reset issue?
> >
> > Not to my knowledge.
> >
> > > How is the CommClk bit determined (to set or clear)? And is it okay to
> > > enable this bit after booting the kernel?
> >
> > It is somewhere in drivers/pci/pcie/aspm.c, i.e.,
> > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/pci/pcie/aspm.c?id=v6.13#n383
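
For context: the common-clock handling in that file,
pcie_aspm_configure_common_clock(), sets Common Clock Configuration on both
ends of the link only when both ends report "Slot Clock Configuration" in their
Link Status registers, i.e. both actually use the slot's reference clock, and
it then retrains the link. The untested, illustrative sketch below only mirrors
that decision on two Link Status values read from user space (e.g. with
"setpci -s <bdf> CAP_EXP+0x12.w"); it does not write anything. Setting CommClk
on only one end, or without a retrain, would leave the two ends with an
inconsistent configuration.

/* commclk_check.c - untested sketch, illustrative only */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PCI_EXP_LNKSTA_SLC 0x1000	/* Slot Clock Configuration (Link Status) */
#define PCI_EXP_LNKCTL_CCC 0x0040	/* Common Clock Configuration (Link Control) */
#define PCI_EXP_LNKCTL_RL  0x0020	/* Retrain Link (Link Control) */

int main(int argc, char **argv)
{
	uint16_t rp_lnksta, ep_lnksta;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <rootport LnkSta hex> <endpoint LnkSta hex>\n",
			argv[0]);
		return 1;
	}
	rp_lnksta = (uint16_t)strtoul(argv[1], NULL, 16);
	ep_lnksta = (uint16_t)strtoul(argv[2], NULL, 16);

	/* Same rule the kernel applies: both ends must report Slot Clock
	 * Configuration before Common Clock Configuration may be enabled. */
	if ((rp_lnksta & PCI_EXP_LNKSTA_SLC) && (ep_lnksta & PCI_EXP_LNKSTA_SLC))
		printf("Both ends use the slot reference clock: CommClk (LnkCtl bit 0x%04x)\n"
		       "could be set on both ends, followed by a link retrain (LnkCtl bit\n"
		       "0x%04x on the root port).\n",
		       PCI_EXP_LNKCTL_CCC, PCI_EXP_LNKCTL_RL);
	else
		printf("At least one end does not report Slot Clock Configuration:\n"
		       "CommClk should stay clear.\n");
	return 0;
}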