On Tue, Jun 13, 2017 at 09:58:22AM +0530, Oza Oza wrote:
> On Tue, Jun 13, 2017 at 5:00 AM, Bjorn Helgaas <helgaas@xxxxxxxxxx> wrote:
> > Please wrap your changelogs to use 75 columns.  "git log" indents the
> > changelog by four spaces, so if your text is 75 wide, it will still
> > fit without wrapping.
> >
> > On Sun, Jun 11, 2017 at 09:35:37AM +0530, Oza Pawandeep wrote:
> >> For Configuration Requests only, following reset
> >> it is possible for a device to terminate the request
> >> but indicate that it is temporarily unable to process
> >> the Request, but will be able to process the Request
> >> in the future – in this case, the Configuration Request
> >> Retry Status 10 (CRS) Completion Status is used
> >
> > How does this relate to the CRS support we already have in the core,
> > e.g., pci_bus_read_dev_vendor_id()?  It looks like your root complex
> > already returns 0xffff0001 (CFG_RETRY_STATUS) in some cases.
> >
> > Also, per spec (PCIe r3.1, sec 2.3.2), CRS Software Visibility only
> > affects config reads of the Vendor ID, but you call
> > iproc_pcie_cfg_retry() for all config offsets.
>
> Yes, as per the spec, CRS Software Visibility only affects config
> reads of the Vendor ID.  For config writes and all other config
> reads, the Root Complex must automatically re-issue the configuration
> request as a new request, and our PCIe RC fails to do so.

OK, if this is a workaround for a hardware defect, let's make that
explicit in the changelog (and probably a comment in the code, too).

I'm actually not sure the spec *requires* the CRS retries to be done
directly in hardware, so it's conceivable the hardware could be working
as designed.  But a comment would go a long way toward making this
understandable by differentiating it from the generic CRS handling in
the core.

Bjorn
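
[Editor's note: for concreteness, the kind of code comment Bjorn is asking
for would most naturally sit on the retry helper itself.  Below is a
minimal sketch, not the actual patch: the iproc_pcie_cfg_retry() name and
the 0xffff0001 CFG_RETRY_STATUS value come from the thread above, but the
function signature, the timeout budget, and the delay step are assumptions
made purely for illustration.]

#include <linux/delay.h>
#include <linux/io.h>

#define CFG_RETRY_STATUS		0xffff0001
/* Total retry budget; 500 ms is an assumed value, not from the patch. */
#define CFG_RETRY_STATUS_TIMEOUT_US	500000

static unsigned int iproc_pcie_cfg_retry(void __iomem *cfg_data_p)
{
	int timeout = CFG_RETRY_STATUS_TIMEOUT_US;
	unsigned int data;

	/*
	 * Hardware workaround (per this thread): when an endpoint returns
	 * CRS, the iProc root complex hands the magic value 0xffff0001
	 * back to software for *any* config offset instead of re-issuing
	 * the request itself, as PCIe r3.1, sec 2.3.2 expects for
	 * everything other than a Vendor ID read with CRS Software
	 * Visibility enabled.  Retry in software until the device can
	 * service the request or the budget expires.
	 */
	data = readl(cfg_data_p);
	while (data == CFG_RETRY_STATUS && timeout > 0) {
		udelay(10);
		timeout -= 10;
		data = readl(cfg_data_p);
	}

	/* Still not ready: report all ones, as for a failed config read. */
	if (data == CFG_RETRY_STATUS)
		data = 0xffffffff;

	return data;
}

Documenting the defect once, at the helper, rather than at each call
site, would also make clear why this path is distinct from the generic
CRS polling the core already does in pci_bus_read_dev_vendor_id().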