Re: [RFC 2/2] dt-bindings: firmware: tegra186-bpmp: Document interconnects property

On Fri, Jan 17, 2020 at 05:23:43PM +0200, Georgi Djakov wrote:
> Hi Thierry,
> 
> Thanks for the patch!
> 
> On 1/14/20 20:15, Thierry Reding wrote:
> > From: Thierry Reding <treding@xxxxxxxxxx>
> > 
> > Document the interconnects property that is used to describe the paths
> > from and to system memory from and to the BPMP.
> > 
> > Signed-off-by: Thierry Reding <treding@xxxxxxxxxx>
> > ---
> > Rob, Georgi,
> > 
> > after the initial RFC that I did for adding interconnect properties on
> > Tegra, I realized that the description wasn't complete. This is an
> > attempt at a more accurate description, but unfortunately I'm not sure
> > if it's even correct in terms of the interconnect bindings.
> > 
> > The problem here is that on Tegra, each device has multiple paths to
> > system memory, and I have no good idea on what to pick as the default.
> > They are all basically the same path, but each provides extra controls
> > to configure the "interconnect".
> 
> Are these multiple paths between a device and system memory used simultaneously
> for load-balancing, or who makes the decision about which path would be used?

It varies. The vast majority of these paths are read/write pairs, which
can be configured separately. There are also cases where multiple paths
are used for load-balancing, and I don't think there's any direct
software control over which path will be used.

A third class is where you have one device but two read/write pairs,
one of which is tied to a microcontroller that's part of the device,
while the other read/write pair is used for DMA to/from the device.

Often in the latter case the microcontroller memory client interfaces
will be used by the microcontroller to read firmware, and once the
microcontroller has booted up, the DMA memory client interfaces will be
used to read/write bulk data (like frame buffers, etc.) in system
memory.
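
To make that concrete, a rough sketch of what such a device could look
like in device tree, using made-up memory client defines and path names
purely for illustration:

    device@50000000 {
        compatible = "nvidia,tegra186-xyz";

        /*
         * One read/write pair for the on-device microcontroller (used
         * to fetch firmware) and one read/write pair for bulk DMA. The
         * XYZ* memory client names are hypothetical.
         */
        interconnects = <&emc &mc TEGRA186_MEMORY_CLIENT_XYZR>,
                        <&mc TEGRA186_MEMORY_CLIENT_XYZW &emc>,
                        <&emc &mc TEGRA186_MEMORY_CLIENT_XYZDMAR>,
                        <&mc TEGRA186_MEMORY_CLIENT_XYZDMAW &emc>;
        interconnect-names = "fw-read", "fw-write",
                             "dma-read", "dma-write";
    };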

> Is this based on the client/stream ID that you mentioned previously?

These are all what are called memory client IDs, which identify the
corresponding interfaces to the memory controller. Stream IDs are
slightly higher-level and typically identify the "module" that uses
the SMMU. Generally a stream ID is mapped to one or more memory client
IDs.
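
In device tree terms that means a single stream ID in "iommus" usually
corresponds to several memory client IDs in "interconnects", as in the
BPMP example from the patch below:

    iommus = <&smmu TEGRA186_SID_BPMP>;
    interconnects = <&emc &mc TEGRA186_MEMORY_CLIENT_BPMPR>,
                    <&mc TEGRA186_MEMORY_CLIENT_BPMPW &emc>,
                    <&emc &mc TEGRA186_MEMORY_CLIENT_BPMPDMAR>,
                    <&mc TEGRA186_MEMORY_CLIENT_BPMPDMAW &emc>;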

> Looking at the binding below, it seems to me like there are different
> master/slave pairs between MC and EMC and each link is used for
> unidirectional traffic only. In terms of the interconnect API, both read
> and write paths have the same direction.

I'm not sure I understand what you mean by this last sentence. Are you
saying that each path, in terms of the interconnect API, is always a
bidirectional link?

> Is the EMC really an interconnect provider or is it just a slave port? Can
> we scale both EMC and MC independently?

The EMC is the only one where we can scale the frequency, but the MC has
various knobs that can be used to fine-tune arbitration, set maximum
latency, etc.

I vaguely recall Dmitry mentioning that the EMC in early generations of
Tegra used to have controls for individual memory clients, but I don't
see that in more recent generations.

Thierry

> > Any ideas on how to resolve this? Let me know if the DT bindings and
> > example don't make things clear enough.
> > 
> > Thierry
> > 
> >  .../firmware/nvidia,tegra186-bpmp.yaml        | 59 +++++++++++++++++++
> >  1 file changed, 59 insertions(+)
> > 
> > diff --git a/Documentation/devicetree/bindings/firmware/nvidia,tegra186-bpmp.yaml b/Documentation/devicetree/bindings/firmware/nvidia,tegra186-bpmp.yaml
> > index dabf1c1aec2f..d40fcd836e90 100644
> > --- a/Documentation/devicetree/bindings/firmware/nvidia,tegra186-bpmp.yaml
> > +++ b/Documentation/devicetree/bindings/firmware/nvidia,tegra186-bpmp.yaml
> > @@ -43,6 +43,24 @@ properties:
> >        - enum:
> >            - nvidia,tegra186-bpmp
> >  
> > +  interconnects:
> > +    $ref: /schemas/types.yaml#/definitions/phandle-array
> > +    description: A list of phandle and specifier pairs that describe the
> > +      interconnect paths to and from the BPMP.
> > +
> > +  interconnect-names:
> > +    $ref: /schemas/types.yaml#/definitions/non-unique-string-array
> > +    description: One string for each pair of phandle and specifier in the
> > +      "interconnects" property.
> > +    # XXX We need at least one of these to be named dma-mem so that the core
> > +    # will set the DMA mask based on the DMA parent, but all of these go to
> > +    # system memory eventually.
> > +    items:
> > +      - const: dma-mem
> > +      - const: dma-mem
> > +      - const: dma-mem
> > +      - const: dma-mem
> > +
> >    iommus:
> >      $ref: /schemas/types.yaml#/definitions/phandle-array
> >      description: |
> > @@ -152,8 +170,43 @@ additionalProperties: false
> >  
> >  examples:
> >    - |
> > +    #include <dt-bindings/clock/tegra186-clock.h>
> >      #include <dt-bindings/interrupt-controller/arm-gic.h>
> >      #include <dt-bindings/mailbox/tegra186-hsp.h>
> > +    #include <dt-bindings/memory/tegra186-mc.h>
> > +
> > +    mc: memory-controller@2c00000 {
> > +        compatible = "nvidia,tegra186-mc";
> > +        reg = <0x02c00000 0xb0000>;
> > +        interrupts = <GIC_SPI 223 IRQ_TYPE_LEVEL_HIGH>;
> > +        status = "disabled";
> > +
> > +        #interconnect-cells = <1>;
> > +        #address-cells = <2>;
> > +        #size-cells = <2>;
> > +
> > +        ranges = <0x02c00000 0x0 0x02c00000 0x0 0xb0000>;
> > +
> > +        /*
> > +         * Memory clients have access to all 40 bits that the memory
> > +         * controller can address.
> > +         */
> > +        dma-ranges = <0x0 0x0 0x0 0x100 0x0>;
> > +
> > +        #memory-controller-cells = <0>;
> > +
> > +        emc: external-memory-controller@2c60000 {
> > +            compatible = "nvidia,tegra186-emc";
> > +            reg = <0x0 0x02c60000 0x0 0x50000>;
> > +            interrupts = <GIC_SPI 224 IRQ_TYPE_LEVEL_HIGH>;
> > +            clocks = <&bpmp TEGRA186_CLK_EMC>;
> > +            clock-names = "emc";
> > +
> > +            #interconnect-cells = <0>;
> > +
> > +            nvidia,bpmp = <&bpmp>;
> > +        };
> > +    };
> >  
> >      hsp_top0: hsp@3c00000 {
> >          compatible = "nvidia,tegra186-hsp";
> > @@ -187,6 +240,12 @@ examples:
> >  
> >      bpmp {
> >          compatible = "nvidia,tegra186-bpmp";
> > +        interconnects = <&emc &mc TEGRA186_MEMORY_CLIENT_BPMPR>,
> > +                        <&mc TEGRA186_MEMORY_CLIENT_BPMPW &emc>,
> > +                        <&emc &mc TEGRA186_MEMORY_CLIENT_BPMPDMAR>,
> > +                        <&mc TEGRA186_MEMORY_CLIENT_BPMPDMAW &emc>;
> > +        interconnect-names = "dma-mem", "dma-mem", "dma-mem", "dma-mem";
> > +
> >          iommus = <&smmu TEGRA186_SID_BPMP>;
> >          mboxes = <&hsp_top0 TEGRA_HSP_MBOX_TYPE_DB TEGRA_HSP_DB_MASTER_BPMP>;
> >          shmem = <&cpu_bpmp_tx &cpu_bpmp_rx>;
> > 
