Re: How to correctly define memory range of PCIe config space

On Sat, Aug 6, 2022 at 6:23 AM Pali Rohár <pali@xxxxxxxxxx> wrote:
>
> On Saturday 06 August 2022 17:46:14 Manivannan Sadhasivam wrote:
> > On Sat, Aug 06, 2022 at 01:17:02PM +0200, Pali Rohár wrote:
> > > On Saturday 06 August 2022 16:36:13 Manivannan Sadhasivam wrote:
> > > > Hi Pali,
> > > >
> > > > On Mon, Jul 11, 2022 at 12:51:08AM +0200, Pali Rohár wrote:
> > > > > Hello!
> > > > >
> > > > > Together with Mauri, we are working on extending the pci-mvebu.c
> > > > > driver to support Orion PCIe controllers, as these controllers are
> > > > > the same as the mvebu controller.
> > > > >
> > > > > There is just one big difference: config space access on Orion is
> > > > > different. mvebu uses the classic Intel CFC/CF8 registers for
> > > > > indirect config space access, but Orion has a directly memory-mapped
> > > > > config space. So Orion DTS files need to declare this memory range
> > > > > for the config space, and the pci-mvebu.c driver has to read this
> > > > > range from the DTS and map it properly.
> > > > >
> > > > > So my question is: how do we properly define the config space range
> > > > > in a device tree file? In which device tree property and in which
> > > > > format? Please note that this config space memory range is PCIe root
> > > > > port specific, and it requires its own MBUS_ID(), like the memory
> > > > > ranges for the PCIe MEM and PCIe I/O mappings. Please look e.g. at
> > > > > armada-385.dtsi to see how MBUS_ID() is used:
> > > > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arm/boot/dts/armada-385.dtsi
> > > > >
> > > >
> > > > On most platforms, the standard "reg" property is used to specify the
> > > > config space together with other device-specific memory regions. For
> > > > instance, on the Qcom platforms based on the Designware IP, we have
> > > > the regions below:
> > > >
> > > >       reg = <0xfc520000 0x2000>,
> > > >             <0xff000000 0x1000>,
> > > >             <0xff001000 0x1000>,
> > > >             <0xff002000 0x2000>;
> > > >       reg-names = "parf", "dbi", "elbi", "config";
> > > >
> > > > Where "parf" and "elbi" are Qcom controller specific regions, while "dbi" and
> > > > "config" (config space) are common to all Designware IPs.
> > > >
> > > > These properties are documented in: Documentation/devicetree/bindings/pci/qcom,pcie.yaml
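> > > >
> > > > For illustration, here is a rough sketch (not the actual qcom driver
> > > > code; the function name is hypothetical) of how a driver maps these
> > > > named regions:
> > > >
> > > >   #include <linux/err.h>
> > > >   #include <linux/platform_device.h>
> > > >
> > > >   static int pcie_map_named_regions(struct platform_device *pdev)
> > > >   {
> > > >           void __iomem *dbi, *cfg;
> > > >
> > > >           /* Look up "reg" entries via their "reg-names" strings */
> > > >           dbi = devm_platform_ioremap_resource_byname(pdev, "dbi");
> > > >           if (IS_ERR(dbi))
> > > >                   return PTR_ERR(dbi);
> > > >
> > > >           cfg = devm_platform_ioremap_resource_byname(pdev, "config");
> > > >           if (IS_ERR(cfg))
> > > >                   return PTR_ERR(cfg);
> > > >
> > > >           return 0;
> > > >   }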
> > > >
> > > > Hope this helps!
> > >
> > > Hello! I have already looked at this. But as I pointed out with the
> > > armada-385.dtsi file above, mvebu is quite complicated. First, it does
> > > not use explicit address ranges, but rather MBUS_ID() macros, whose
> > > addresses are assigned at kernel runtime by the mbus driver. The
> > > second issue is that the config space range (like any other resource)
> > > is PCIe root port specific. So it cannot live in the PCIe controller
> > > node, and in PCIe device nodes the "reg" property is reserved for the
> > > PCI BDF address.
> > >
> > > In the last few days, I spent some time on this issue. After reading a
> > > lot of PCIe DTS files, bindings and other documents (including the
> > > Open Firmware pci2_1.pdf), I'm proposing the following definition:
> > >
> > > soc {
> > >   pcie-mem-aperture = <0xe0000000 0x08000000>; /* 128 MiB memory space */
> > >   pcie-cfg-aperture = <0xf0000000 0x01000000>; /*  16 MiB config space */
> > >   pcie-io-aperture  = <0xf2000000 0x00100000>; /*   1 MiB I/O space */
> > >
> > >   pcie {
> > >     ranges = <0x82000000 0 0x40000     MBUS_ID(0xf0, 0x01) 0x40000  0x0 0x2000>,    /* Port 0.0 Internal registers */
> > >              <0x82000000 0 0xf0000000  MBUS_ID(0x04, 0x79) 0        0x0 0x1000000>, /* Port 0.0 Config space */
> > >              <0x82000000 1 0x0         MBUS_ID(0x04, 0x59) 0        0x1 0x0>,       /* Port 0.0 Mem */
> > >              <0x81000000 1 0x0         MBUS_ID(0x04, 0x51) 0        0x1 0x0>,       /* Port 0.0 I/O */
> > >
> > >     pcie@1,0 {
> > >       reg = <0x0800 0 0 0 0>; /* BDF 0:1.0 */
> > >       assigned-addresses =     <0x82000800 0 0x40000     0x0 0x2000>,     /* Port 0.0 Internal registers */
> > >                                <0x82000800 0 0xf0000000  0x0 0x1000000>;  /* Port 0.0 Config space */
> > >       ranges = <0x82000000 0 0  0x82000000 1 0           0x1 0x0>,        /* Port 0.0 Mem */
> > >                <0x81000000 0 0  0x81000000 1 0           0x1 0x0>;        /* Port 0.0 I/O */
> > >     };
> > >   };
> > > };
> > >
> > > So the PCI config space address range would be defined in the
> > > "assigned-addresses" property as the _second_ value. The first value
> > > is already used for specifying the internal registers (similar to
> > > what "parf" is for qcom).
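> > >
> > > For the driver side, a rough sketch of how pci-mvebu.c could pick up
> > > that second entry (function name hypothetical; this assumes
> > > of_address_to_resource() parses "assigned-addresses" for PCI child
> > > nodes and that the mbus window is already assigned):
> > >
> > >   #include <linux/io.h>
> > >   #include <linux/of_address.h>
> > >
> > >   /* child = the pcie@1,0 port node; dev = the controller device */
> > >   static void __iomem *map_port_conf_space(struct device *dev,
> > >                                            struct device_node *child)
> > >   {
> > >           struct resource res;
> > >
> > >           /* entry 0 = internal registers, entry 1 = config space */
> > >           if (of_address_to_resource(child, 1, &res))
> > >                   return IOMEM_ERR_PTR(-ENOENT);
> > >
> > >           return devm_ioremap_resource(dev, &res);
> > >   }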
> > >
> >
> > Sounds reasonable to me. Another option would be to introduce a
> > mvebu-specific property, but that would be the least preferred option,
> > I guess.
> >
> > But the fact that the "assigned-addresses" property is described as
> > "MMIO registers" also adds to the justification, IMO.
> >
> > Rob/Krzysztof could always correct that during binding review.
>
> Ok!
>
> > > The config space is currently limited to 16 MB (without extended
> > > PCIe config space), but once we find a free contiguous physical
> > > address window of 256 MB, we can extend it to the full PCIe config
> > > space range.
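> > > (Conventional config space is 256 buses x 32 devices x 8 functions
> > > x 256 bytes = 16 MB; extended config space is 4 KB per function,
> > > giving 256 MB.)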
> > >
> > > Any objections to above device tree definition?
> > >
> >
> > Are you also converting the binding to YAML for validation?
>
> I still have an issue understanding the YAML schema declaration and do
> not know how to express all those properties in this schema language
> correctly. Also, I was not able to set up the infrastructure for
> running schema binding tests. So I'm currently not planning to do this.

What's the issue exactly with installing? 'pip install dtschema'
doesn't work for you?
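
FWIW, once dtschema is installed, the kernel's own make targets do the
validation (the mvebu schema path below is hypothetical until the
binding is actually converted):

  # check the binding schema itself
  make dt_binding_check DT_SCHEMA_FILES=Documentation/devicetree/bindings/pci/marvell,mvebu-pcie.yaml

  # check built dtbs against all schemas
  make dtbs_check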

> It would really be a good idea to provide some web service where
> people could upload their work-in-progress DTS files and YAML schemas
> for automatic validation.

You can send patches and they get tested. That works until I get
annoyed that you aren't testing your own patches, since I review the
results and the testing runs on my h/w (each patch takes 5-15 minutes).
If someone wants to donate h/w for testing, I'd be happy to provide
unreviewed, fully automated results. I just need gitlab runner(s) with
docker to point the CI job at.

There are already several json-schema validator websites. Standing one
up for our specific needs probably wouldn't be too hard.

Rob