Re: [PATCH v4 1/2] Documentation: Documentation for qcom, llcc

On 2018-04-18 11:11, Channa wrote:
On 2018-04-18 07:52, Rob Herring wrote:
On Tue, Apr 17, 2018 at 5:12 PM,  <rishabhb@xxxxxxxxxxxxxx> wrote:
On 2018-04-17 10:43, rishabhb@xxxxxxxxxxxxxx wrote:

On 2018-04-16 07:59, Rob Herring wrote:

On Tue, Apr 10, 2018 at 01:08:12PM -0700, Rishabh Bhatnagar wrote:

Documentation for the last level cache controller device tree bindings
and client binding usage examples.


"Documentation: Documentation ..."? That wastes a lot of the subject
line... The preferred prefix is "dt-bindings: ..."


Signed-off-by: Channagoud Kadabi <ckadabi@xxxxxxxxxxxxxx>
Signed-off-by: Rishabh Bhatnagar <rishabhb@xxxxxxxxxxxxxx>
---
 .../devicetree/bindings/arm/msm/qcom,llcc.txt | 58 ++++++++++++++++++++++
 1 file changed, 58 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/arm/msm/qcom,llcc.txt

diff --git a/Documentation/devicetree/bindings/arm/msm/qcom,llcc.txt b/Documentation/devicetree/bindings/arm/msm/qcom,llcc.txt
new file mode 100644
index 0000000..497cf0f
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/msm/qcom,llcc.txt
@@ -0,0 +1,58 @@
+== Introduction ==
+
+LLCC (Last Level Cache Controller) provides the last level of cache
+memory in the SoC, which can be shared by multiple clients. Clients
+here are the different cores in the SoC; the idea is to minimize the
+local caches at the clients and migrate to a common pool of memory.
+
+Properties:
+- compatible:
+        Usage: required
+        Value type: <string>
+        Definition: must be "qcom,sdm845-llcc"
+
+- reg:
+        Usage: required
+        Value type: <prop-encoded-array>
+        Definition: must be addresses and sizes of the LLCC registers


How many address ranges?

It consists of just one address range. I'll edit the definition to make
it clearer.
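
For illustration, a minimal node with its single register range might
look like this (the unit address and size below are placeholders, not
taken from the patch):

	llcc: cache-controller@1100000 {
		compatible = "qcom,sdm845-llcc";
		reg = <0x1100000 0x200000>;	/* one address/size pair */
	};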

+
+- #cache-cells:


This is all written as if it were a common binding, but it is not one.

You already have most of the configuration data for each client in the
driver; I think I'd just put the client connection there too. Is there
any variation of this for a given SoC?

#cache-cells and max-slices won't change for a given SoC. So you want me
to hard-code them in the driver itself?

I can use the of_parse_phandle_with_fixed_args() function and fix the number of args at 1 instead of keeping #cache-cells here in DT. Does that look fine?
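
As a rough sketch (assuming a "cache-slices" client property whose one
arg carries the usecase ID; the names come from this thread, not a
final API), that lookup could be:

	#include <linux/of.h>
	#include <linux/types.h>

	static int get_llcc_target(struct device_node *client_np,
				   struct device_node **llcc_np, u32 *uid)
	{
		struct of_phandle_args args;
		int ret;

		/* Fixed cell count of 1, so no #cache-cells needed in DT. */
		ret = of_parse_phandle_with_fixed_args(client_np,
						       "cache-slices",
						       1, 0, &args);
		if (ret)
			return ret;

		*llcc_np = args.np;	/* LLCC node; caller must of_node_put() */
		*uid = args.args[0];	/* usecase/slice ID */
		return 0;
	}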

No, I'm saying why even put cache-slices properties in DT to begin
with? You could just define client IDs within the kernel and clients
can use those instead of getting the ID from the DT.

The reason to add cache-slices here is to establish a connection between
client and system cache. For example, if we have multiple instances of
system cache blocks and a client wants to choose a system cache instance
based on the use case, then it's easier to establish this connection
using the device tree than by hard-coding it in the driver.
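
As an illustration of that connection (the client node and the ID value
are hypothetical), a client would pick a specific instance like this:

	gpu: gpu@5000000 {
		...
		cache-slices = <&llcc 1>;	/* LLCC instance + usecase ID */
	};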


I have a couple of hesitations with putting this into the DT. First, I
think a cache is just one aspect of describing the interconnect
between masters and memory (and there's been discussions on
interconnect bindings too) and any binding needs to consider all of
the aspects of the interconnect. Second, I'd expect this cache
architecture will change SoC to SoC and the binding here is pretty
closely tied to the current cache implementation (e.g. slices). If
there were a bunch of SoCs with the same design and just different
client IDs (like interrupt IDs), then I'd feel differently.

This is partially true: a bunch of SoCs would support this design, but
client IDs are not expected to change. So ideally client drivers could
hard-code these IDs.

However, I have other concerns about moving the client IDs into the
driver. The way the APIs are implemented today is as follows:
#1. The client calls into the system cache driver to get a cache slice
handle, with the usecase ID as input.
#2. The system cache driver gets the phandle of the system cache
instance from the client device to obtain the private data.
#3. Based on the usecase ID, it performs a lookup in the private data
to get the cache slice handle.
#4. It returns the cache slice handle to the client.

If we don't have the connection between client & system cache, then the
private data needs to be declared as a static global in the system cache
driver, which limits us to just one instance of the system cache block
(see the sketch below).
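
A rough sketch of how step #2 avoids the static global (struct
llcc_drv_data and the property names below are placeholders, not a
final API):

	#include <linux/err.h>
	#include <linux/of.h>
	#include <linux/of_platform.h>
	#include <linux/platform_device.h>

	struct llcc_drv_data;	/* hypothetical per-instance private data */

	static struct llcc_drv_data *client_to_llcc(struct device_node *np)
	{
		struct of_phandle_args args;
		struct platform_device *pdev;
		int ret;

		/* Resolve which LLCC instance this client points at. */
		ret = of_parse_phandle_with_args(np, "cache-slices",
						 "#cache-cells", 0, &args);
		if (ret)
			return ERR_PTR(ret);

		pdev = of_find_device_by_node(args.np);
		of_node_put(args.np);
		if (!pdev)
			return ERR_PTR(-EPROBE_DEFER);

		/* Per-instance private data, no static global needed. */
		return platform_get_drvdata(pdev);
	}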



Rob

Hi Rob:

Can you please provide your opinion on the approach here?

Thanks,
Channa

--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


