Re: [PATCH 1/1] PCI: Use the correct bit in Link Training not active check

On Wed, 6 Mar 2024, Maciej W. Rozycki wrote:
> On Mon, 4 Mar 2024, Ilpo Järvinen wrote:
> 
> > > > Since waiting for Data Link Layer Link Active bit is only used for the
> > > > Target Speed quirk, this only impacts the case when the quirk attempts
> > > > to restore speed to higher than 2.5 GT/s (The link is Up at that point
> > > > so pcie_retrain_link() will fail).
> > > 
> > >  NAK.  It's used for both clamping and unclamping and it will break the 
> > > workaround, because the whole point there is to wait until DLLA has been 
> > > set.  Using LT is not reliable because it will oscillate in the failure 
> > > case and seeing the bit clear does not mean link has been established.  
> > 
> > In pcie_retrain_link(), there are two calls into 
> > pcie_wait_for_link_status() and the second one of them is meant to 
> > implement the link-has-been-established check.
> > 
> > The first wait call comes from e7e39756363a ("PCI/ASPM: Avoid link 
> > retraining race") and is just to ensure the link is not in the middle of 
> > retraining, so that the latest configuration is captured as required by 
> > the implementation note. LT being cleared is exactly what is wanted for that 
> > check because it means that any earlier retraining has ended (a new one 
> > might be starting but that doesn't matter, we saw it cleared so the new 
> > configuration should be in effect for that instance of link retraining).
> > 
> > So my point is, the first check is not even meant to check that link has 
> > been established.
> 
>  I see what you mean, and I now remember the note in the spec.  I had 
> concerns about it, but did not do anything about it at that point.
> 
>  I think we still have no guarantee that LT will be clear at the point we 
> set RL, because LT could get reasserted by hardware between our read and 
> the setting of RL.
>
> IIUC that doesn't matter really, because the new link 
> parameters will be taken into account regardless of whether retraining was
> initiated by hardware in an attempt to do link recovery or triggered by 
> software via RL.

I, too, was somewhat worried about LT never staying clear long enough for 
us to sample it during the wait, but as you say, any new link training 
should take the new Target Speed into account, which should successfully 
bring the link up (assuming the quirk works in the first place) and thus 
clear LT.
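
To make the flow I'm describing concrete, here is a standalone, simplified 
sketch of how I read the two waits. The helper names mirror those in 
drivers/pci/pci.c, but the register accesses are stubbed and the polling 
timeouts are omitted, so treat this as illustrative only, not the actual 
kernel code:

#include <stdbool.h>
#include <stdio.h>

#define LNKSTA_LT	0x0800	/* Link Training (cf. PCI_EXP_LNKSTA_LT) */
#define LNKSTA_DLLLA	0x2000	/* Data Link Layer Link Active */

/* Stub: pretend the link comes up trained; real code reads LNKSTA. */
static unsigned int read_lnksta(void)
{
	return LNKSTA_DLLLA;		/* DLLLA set, LT clear */
}

/* Stub: real code sets the Retrain Link (RL) bit in LNKCTL. */
static void set_retrain_link(void)
{
}

/*
 * Sketch of pcie_wait_for_link_status(): use_lt selects which bit is
 * watched (LT vs. DLLLA), active selects whether we wait for it to be
 * set or cleared.  The real helper polls with a timeout.
 */
static bool wait_for_link_status(bool use_lt, bool active)
{
	unsigned int bit = use_lt ? LNKSTA_LT : LNKSTA_DLLLA;
	unsigned int want = active ? bit : 0;

	return (read_lnksta() & bit) == want;
}

/* Sketch of the retrain sequence discussed above. */
static bool retrain_link(bool use_lt)
{
	/*
	 * First wait: LT observed clear, so any retraining started before
	 * the link parameters were changed has finished and the LTSSM will
	 * use the new parameters (PCIe r6.0.1 sec 7.5.3.7 note).
	 */
	if (!wait_for_link_status(true, false))
		return false;

	set_retrain_link();

	/*
	 * Second wait: link re-established.  The Target Speed quirk passes
	 * use_lt == false and waits for DLLLA to be set; other callers
	 * wait for LT to clear again.
	 */
	return wait_for_link_status(use_lt, !use_lt);
}

int main(void)
{
	printf("retrain (wait for DLLLA): %s\n",
	       retrain_link(false) ? "ok" : "timed out");
	printf("retrain (wait for LT clear): %s\n",
	       retrain_link(true) ? "ok" : "timed out");
	return 0;
}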

> > Of course, I cannot test this with your configuration so I cannot 
> > confirm how the target speed quirk behaves, I just found it by reading the 
> > code. The current code does not make sense because the first wait is 
> > supposed to wait for LT bit, not for DLLLA.
> 
>  I think I now understand the problem correctly, and indeed from master 
> Linux repo's point of view it's been a defect with the merge referred.  
> So I withdraw my objection.  Sorry about the confusion.

Thanks, and the confusion was entirely my fault, caused by my confusing 
and partly wrong wording.

>  However I think the last paragraph of your commit description:
> 
> > Since waiting for Data Link Layer Link Active bit is only used for the
> > Target Speed quirk, this only impacts the case when the quirk attempts
> > to restore speed to higher than 2.5 GT/s (The link is Up at that point
> > so pcie_retrain_link() will fail).
> 
> is not accurate enough, which contributed to my confusion.  In particular 
> `pcie_retrain_link' succeeds when the link is up, because it calls 
> `pcie_wait_for_link_status' such as to succeed when either LT is clear or 
> DLLLA is set.

I know, it was simply wrong because I somehow got those use_lt negations 
the wrong way around (and thus thought I had found a different problem 
than there really was).
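
For the record, this is how I now read the (use_lt, active) arguments of 
pcie_wait_for_link_status() (my own summary, so take it with a grain of 
salt):

	use_lt  active   waits until
	true    true     LT == 1     (retraining in progress)
	true    false    LT == 0     (retraining not active)
	false   true     DLLLA == 1  (link up)
	false   false    DLLLA == 0  (link down)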

I already did some major rewriting here to explain how the code ended up 
in its current shape, but I'll also consider your explanation below, 
which is likely better than what I currently have, thanks.

>  How about:
> 
> This only impacts the Target Speed quirk, which is the only case where 
> waiting for Data Link Layer Link Active bit is used.  It currently works 
> in the problematic case by means of link training getting initiated by 
> hardware repeatedly and respecting the new link parameters set by the 
> caller, which then make training succeed and bring the link up, setting 
> DLLLA and causing pcie_wait_for_link_status() to return success.  We are 
> not supposed to rely on luck and need to make sure that LT transitioned
> through the inactive state though before we initiate link training by 
> hand via RL.
>
> then?
> 
>  Also for `pcie_retrain_link' I think it would be clearer if we rephrased 
> the note as follows:
> 
> 	 * Ensure the updated LNKCTL parameters are used during link
> 	 * training by checking that there is no ongoing link training
> 	 * that may have started before link parameters were changed, so
> 	 * as to avoid LTSSM race as recommended in Implementation Note
> 	 * at the end of PCIe r6.0.1 sec 7.5.3.7.
> 
>  NB I am currently away on holiday until the end of next week.  However I 
> do have access to my lab and I'll see if I can verify your change tonight
> or another evening this week, and overall thank you for your patience.

Don't worry. I can easily wait even until you're back from the holiday, 
so no need to push it or anything. :-)

-- 
 i.
