Re: [BUG] nvme-pci: NVMe probe fails with ENODEV

[+cc Aleksander, original report at
https://lore.kernel.org/r/975cc790-7dd9-4902-45c1-c69b4be9ba3a@xxxxxxxxxxxxxxx]

On Thu, Mar 09, 2023 at 07:34:18PM +0530, Rajat Khandelwal wrote:
> On 3/9/2023 7:31 PM, Rajat Khandelwal wrote:
> > Hi,
> > I am seeking some help regarding an issue I encounter sporadically
> > with Samsung Portable TBT SSD X5.
> > 
> > Everything is fine, from thunderbolt discovery through PCIe enumeration,
> > until 'NVME_REG_CSTS' is read in 'nvme_reset_work'. Specifically,
> > 'readl(dev->bar + NVME_REG_CSTS)' fails.

> > I handle type-C, thunderbolt, and USB4 on Chrome platforms, and we are
> > currently working on Intel Raptor Lake systems.
> > This issue has been seen since the Alder Lake (ADL) time frame and now
> > occurs on RPL as well. I would really like to get to the bottom of the
> > problem and close it out.
> > 
> > I have tried 5.10 and 6.1.15 kernels.

It's intermittent, but happens on both v5.10 and v6.1.15.  So we have
no reason to think this is a regression, right?

And you see it on ADL and RPL?  Do you see it on any other platforms?
Have you tried any others?

> > During the issue:
> > Contents of BAR-0: <garbage> 00000004 (dumped using setpci)
> > Contents of kernel PCI resource-0: 0x83000000 (matches with the mem allocation)
> > Issue: nvme nvme1: Removing after probe failure status: -19

How exactly did you use setpci and what was "<garbage>"?  Can you
include the entire transcript, e.g.,

  $ setpci -G -s 01:00.0 BASE_ADDRESS_0.L
  Trying method linux-sysfs......using /sys/bus/pci...OK
  Decided to use linux-sysfs
  ec000000

What does "lspci -vvxxx" show in this case?
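
For example, using the 03:00.0 address from your log:

  $ sudo lspci -s 03:00.0 -vvxxx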

I guess "kernel PCI resource-0: 0x83000000" means the following from
your dmesg log, right?

  pci 0000:03:00.0: BAR 0: assigned [mem 0x83000000-0x83003fff 64bit]

I think the first access to the device should be here (same as what
Keith said):

  nvme_probe
    nvme_pci_enable
      pci_enable_device_mem
      pci_set_master
      readl(dev->bar + NVME_REG_CSTS)
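
For reference, I believe the check that turns an all-ones read into
-ENODEV is near the top of nvme_pci_enable(); roughly this (paraphrasing
drivers/nvme/host/pci.c, details vary by kernel version):

  /* a surprise-removed or otherwise dead device reads all-ones */
  if (readl(dev->bar + NVME_REG_CSTS) == -1) {
          result = -ENODEV;       /* -19, matching the probe failure above */
          goto disable;
  }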

But you mention nvme_reset_work() above.  How did you figure that out?

Maybe there's a race where we reset the device (which clears the BARs)
and do MMIO accesses before the BARs are restored.
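
If you can instrument the kernel, a quick debug print just before that
readl() would tell us whether the config-space BAR still matches the
cached resource at that moment. A hypothetical sketch (pdev being the
NVMe device's struct pci_dev):

  u32 bar0;

  /* hypothetical debug aid: compare live BAR0 with the cached resource */
  pci_read_config_dword(pdev, PCI_BASE_ADDRESS_0, &bar0);
  dev_info(&pdev->dev, "BAR0 config %#010x vs resource %pR\n",
           bar0, &pdev->resource[0]);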

Or maybe some PCI error happens and nvme_reset_work() is invoked as
part of recovery?  I see some *corrected* AER errors in your log, but
none look related to your NVMe device at 03:00.0.

I assume reading the BAR with setpci happens in "slow user time," so
that's presumably the steady state of the BAR after nvme_probe() fails
with -19.
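
One way to narrow that down would be to poll the BAR from userspace
while reproducing, so we can see whether it gets cleared before or after
the probe fails. A rough (untested) loop, reusing the 03:00.0 address
from your log:

  $ while true; do setpci -s 03:00.0 BASE_ADDRESS_0.L; sleep 0.1; done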

> > During a working case:
> > Contents of BAR-0: 83000004 (dumped using setpci)
> > 
> > It seems the kernel PCI resource contents don't change (so the
> > ioremap succeeds), but somehow BAR-0 reads back garbage.
> > 
> > The logs for the scenario: (apologies if this is not the right way to
> > attach a log on the mailing list; I have never done that before :)).

> > ... (see original report at
> > https://lore.kernel.org/r/975cc790-7dd9-4902-45c1-c69b4be9ba3a@xxxxxxxxxxxxxxx)

Bjorn


