Re: PCI: Work around PCIe link training failures

On Thu, 15 Aug 2024, Matthew W Carlis wrote:

> > Well, in principle in a setup with reliable links the LBMS bit may never 
> > be set, e.g. this system of mine has been in 24/7 operation since the last 
> > reboot 410 days ago and for the devices that support Link Active reporting 
> > it shows:
> > ...
> > so out of 11 devices 6 have the LBMS bit clear.  But then 5 have it set, 
> > perhaps worryingly, so of course you're right that it will get set in the 
> > field, though it's not enough by itself for your problem to trigger.
> 
> The way I look at it is that it's essentially a probability distribution over time,
> but I try to avoid learning too much about the physical layer because I would find
> myself debugging more hardware issues lol. I also don't think LBMS/LABS being set
> by itself is very interesting without knowing the rate at which it is being set.

 Agreed.  Ilpo's upcoming bandwidth controller will hopefully give us such 
data.
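
 In the meantime something like the hack below could give a rough idea of 
that rate.  It's an untested, hypothetical user-space sketch (not part of 
this series): it walks the capability list through the device's sysfs 
config file, polls the Link Status register, and counts how often LBMS 
gets set, clearing the RW1C bits each time they fire.  It needs root for 
the config space write and assumes a little-endian host.

/* poll-lbms.c: rough LBMS set-rate estimate for one PCI device. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define PCI_CAP_LIST_PTR	0x34
#define PCI_CAP_ID_EXP		0x10	/* PCI Express capability */
#define PCI_EXP_LNKSTA		0x12	/* Link Status register */
#define PCI_EXP_LNKSTA_LBMS	0x4000	/* Link Bandwidth Mgmt Status */
#define PCI_EXP_LNKSTA_LABS	0x8000	/* Link Autonomous Bw Status */

/* Walk the conventional capability list for the PCIe capability. */
static int find_pcie_cap(int fd)
{
	uint8_t pos, id;

	if (pread(fd, &pos, 1, PCI_CAP_LIST_PTR) != 1)
		return -1;
	pos &= 0xfc;
	while (pos) {
		if (pread(fd, &id, 1, pos) != 1)
			return -1;
		if (id == PCI_CAP_ID_EXP)
			return pos;
		if (pread(fd, &pos, 1, pos + 1) != 1)
			return -1;
		pos &= 0xfc;
	}
	return -1;
}

int main(int argc, char **argv)
{
	char path[128];
	uint16_t sta;
	long hits = 0, polls;
	int fd, cap;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <domain:bus:dev.fn>\n", argv[0]);
		return 1;
	}
	snprintf(path, sizeof(path),
		 "/sys/bus/pci/devices/%s/config", argv[1]);
	fd = open(path, O_RDWR);
	if (fd < 0) {
		perror(path);
		return 1;
	}
	cap = find_pcie_cap(fd);
	if (cap < 0) {
		fprintf(stderr, "%s: no PCIe capability\n", argv[1]);
		return 1;
	}
	/* Sample for about a minute at a 100ms period. */
	for (polls = 0; polls < 600; polls++) {
		if (pread(fd, &sta, 2, cap + PCI_EXP_LNKSTA) != 2)
			break;
		if (sta & PCI_EXP_LNKSTA_LBMS) {
			hits++;
			/* LBMS/LABS are RW1C: write 1s to clear them. */
			sta = PCI_EXP_LNKSTA_LBMS | PCI_EXP_LNKSTA_LABS;
			pwrite(fd, &sta, 2, cap + PCI_EXP_LNKSTA);
		}
		usleep(100000);
	}
	printf("%s: LBMS set %ld times in %ld polls\n",
	       argv[1], hits, polls);
	return 0;
}

 Of course this only gives a lower bound: LBMS getting set more than once 
within a polling period still counts as a single event.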

> FWIW I have seen some devices in the past going into recovery state many times a
> second & still never downtrain, but at the same time they were setting the
> LBMS/LABS bits, which may not be quite spec compliant.
> 
> I would like to help test these changes, but I would like to avoid having to test
> each mentioned change individually. Does anyone have any preferences on how I batch
> the patches for testing? Would it be OK if I just pulled them all together in one go?

 Certainly fine with me, especially as 3/4 and 4/4 aren't really related 
to your failure scenario, and you need 1/4 and 2/4 applied together to 
address both aspects of the issue you have reported.

  Maciej



