Re: [PATCH 3/5] dt-bindings: memory: Add Tegra114 memory controller bindings

On 17/12/2021 17:59, Thierry Reding wrote:
> From: Thierry Reding <treding@xxxxxxxxxx>
> 
> Document the bindings for the memory controller found on Tegra114 SoCs.
> 
> Signed-off-by: Thierry Reding <treding@xxxxxxxxxx>
> ---
>  .../nvidia,tegra114-mc.yaml                   | 85 +++++++++++++++++++
>  1 file changed, 85 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/memory-controllers/nvidia,tegra114-mc.yaml
> 
> diff --git a/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra114-mc.yaml b/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra114-mc.yaml
> new file mode 100644
> index 000000000000..2fa50eb3aadb
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra114-mc.yaml
> @@ -0,0 +1,85 @@
> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/memory-controllers/nvidia,tegra114-mc.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: NVIDIA Tegra114 SoC Memory Controller
> +
> +maintainers:
> +  - Thierry Reding <thierry.reding@xxxxxxxxx>
> +  - Jon Hunter <jonathanh@xxxxxxxxxx>
> +
> +description: |
> +  The Tegra114 Memory Controller architecturally consists of the following parts:
> +
> +    Arbitration Domains, which can handle a single request or response per
> +    clock from a group of clients. Typically, a system has a single Arbitration
> +    Domain, but an implementation may divide the client space into multiple
> +    Arbitration Domains to increase the effective system bandwidth.
> +
> +    Protocol Arbiter, which manages a related pool of memory devices. A system
> +    may have a single Protocol Arbiter or multiple Protocol Arbiters.
> +
> +    Memory Crossbar, which routes requests and responses between Arbitration
> +    Domains and Protocol Arbiters. In the simplest version of the system, the
> +    Memory Crossbar is just a pass-through between a single Arbitration Domain
> +    and a single Protocol Arbiter.
> +
> +    Global Resources, which include things like configuration registers which
> +    are shared across the Memory Subsystem.
> +
> +  The Tegra114 Memory Controller handles memory requests from internal clients
> +  and arbitrates among them to allocate memory bandwidth for DDR3L and LPDDR2
> +  SDRAMs.
> +
> +properties:
> +  compatible:
> +    const: nvidia,tegra114-mc
> +
> +  reg:
> +    maxItems: 1
> +
> +  clocks:
> +    maxItems: 1
> +
> +  clock-names:
> +    items:
> +      - const: mc
> +
> +  interrupts:
> +    maxItems: 1
> +
> +  "#reset-cells":
> +    const: 1
> +
> +  "#iommu-cells":
> +    const: 1
> +
> +required:
> +  - compatible
> +  - reg
> +  - interrupts
> +  - clocks
> +  - clock-names
> +  - "#reset-cells"
> +  - "#iommu-cells"
> +
> +additionalProperties: false

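For reference, a node conforming to this schema might look roughly like the
sketch below; the unit address, clock specifier and interrupt cells are
illustrative assumptions rather than values taken from the patch:

    #include <dt-bindings/clock/tegra114-car.h>
    #include <dt-bindings/interrupt-controller/arm-gic.h>

    mc: memory-controller@70019000 {
            compatible = "nvidia,tegra114-mc";
            reg = <0x70019000 0x1000>;              /* assumed base address */
            clocks = <&tegra_car TEGRA114_CLK_MC>;
            clock-names = "mc";
            interrupts = <GIC_SPI 77 IRQ_TYPE_LEVEL_HIGH>;  /* assumed SPI */
            #reset-cells = <1>;
            #iommu-cells = <1>;
    };
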
The binding looks the same as the Tegra210 and Tegra20 MC bindings. What is
the point of having three separate binding documents that are exactly the
same?
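For illustration only, a single shared schema could list all the compatibles
in one place, assuming the bindings really are identical; the file name and
exact compatible strings below are hypothetical:

    # nvidia,tegra-mc.yaml (hypothetical shared schema)
    properties:
      compatible:
        enum:
          - nvidia,tegra20-mc
          - nvidia,tegra114-mc
          - nvidia,tegra210-mc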


Best regards,
Krzysztof


