Re: [RFC PATCH] PCI: hotplug: Fix surprise removal report card present and link failed

On Thu, Jan 17, 2019 at 08:07:13PM +0800, Dongdong Liu wrote:
> On 2019/1/16 22:22, Lukas Wunner wrote:
> > On Wed, Jan 16, 2019 at 10:31:04PM +0800, Dongdong Liu wrote:
> > > The lspci -tv topology is as below.
> > >  +-[0000:80]-+-00.0-[81]----00.0  Huawei Technologies Co., Ltd. Device 3714
> > >  |           +-02.0-[82]----00.0  Huawei Technologies Co., Ltd. Device 3714
> > >  |           +-04.0-[83]----00.0  Huawei Technologies Co., Ltd. Device 3714
> > >  |           +-06.0-[84]----00.0  Huawei Technologies Co., Ltd. Device 3714
> > >  |           +-10.0-[87]----00.0  Huawei Technologies Co., Ltd. Device 3714
> > > 
> > > Then surprise removal 87:00.0 NVME SSD card. The message is as below.
> > > 
> > > pciehp 0000:80:10.0:pcie004: Slot(36): Link Down
> > > iommu: Removing device 0000:87:00.0 from group 12
> > > pciehp 0000:80:10.0:pcie004: Slot(36): Card present
> > > pcieport 0000:80:10.0: Data Link Layer Link Active not set in 1000 msec
> > > pciehp 0000:80:10.0:pcie004: Failed to check link status
> > 
> > What is the problem that you're trying to fix?  That these messages
> > are logged?  Or is there a bigger issue?  If the only problem are the
> > messages, then I feel that the current behavior is a feature, not a bug.
> > We could probably tone down the "Failed to check link status" message's
> > severity.  (Currently it's KERN_ERR, all the other messages are KERN_INFO.)
> 
> Yes, the only problem is the messages; they look wrong because
> the card has already been removed from the board, yet the log still
> reports the card as present and then fails to check link status.
> Only toning down the "Failed to check link status" message's
> severity doesn't seem good enough.

Well, getting messages like this is par for the course with PCIe hotplug.

E.g. some older Thunderbolt controllers do not support MSI on their
hotplug ports, but only INTx.  If multiple such devices are daisy-
chained, they'll share an interrupt, so whenever a device is hot-removed,
a "pciehp_isr: no response from device" message is logged with
KERN_INFO severity because the hot-removed device was inaccessible
for its interrupt handler.  The interrupt didn't come from the
hot-removed device, of course, but from another device further
upstream in the daisy-chain where the plug event occurred.  We can't do much
better with such broken hardware.

The reason you're seeing these messages is that it takes an unusually
long time for the controller to clear the Presence Detect State bit
after a Data Link Layer State Changed event upon hot-removal.
That's arguably a quirk of the hardware you're dealing with.

pciehp cannot tell whether the Presence Detect State bit is set
because a new card is already present in the slot or whether it is
still set from the preceding hot-removal and will be cleared shortly.
The protocol doesn't allow for a clean disambiguation, so pciehp
copes by optimistically trying to bring up the slot and giving up
after a certain delay.

There is other quirky hardware out there which flaps the Presence
Detect State and Data Link Layer Link Active bits a couple of times
before they become stable, which is why pciehp needs to try for a
certain period to bring up the slot.

Again, we could probably tone down or remove some of the messages,
but that might make it harder to diagnose when something really
doesn't work.  It's Bjorn's call anyway.

Thanks,

Lukas


