On Mon, Apr 30, 2018 at 05:37:49PM -0700, rishabhb@xxxxxxxxxxxxxx wrote:
> On 2018-04-30 07:33, Rob Herring wrote:
> > On Fri, Apr 27, 2018 at 5:57 PM, <rishabhb@xxxxxxxxxxxxxx> wrote:
> > > On 2018-04-27 07:21, Rob Herring wrote:
> > > > On Mon, Apr 23, 2018 at 04:09:31PM -0700, Rishabh Bhatnagar wrote:
> > > > >
> > > > > Documentation for last level cache controller device tree
> > > > > bindings, client bindings usage examples.
> > > > >
> > > > > Signed-off-by: Channagoud Kadabi <ckadabi@xxxxxxxxxxxxxx>
> > > > > Signed-off-by: Rishabh Bhatnagar <rishabhb@xxxxxxxxxxxxxx>
> > > > > ---
> > > > >  .../devicetree/bindings/arm/msm/qcom,llcc.txt | 60 ++++++++++++++++++++++
> > > > >  1 file changed, 60 insertions(+)
> > > > >  create mode 100644 Documentation/devicetree/bindings/arm/msm/qcom,llcc.txt
> > > >
> > > > My comments on v4 still apply.
> > > >
> > > > Rob
> > >
> > > Hi Rob,
> > > Reposting our replies to your comments on v4:
> > >
> > > This is partially true: a bunch of SoCs would support this design,
> > > but client IDs are not expected to change, so ideally client
> > > drivers could hard-code these IDs.
> > >
> > > However, I have other concerns about moving the client IDs into
> > > the driver. The way the APIs are implemented today is as follows:
> > > #1. The client calls into the system cache driver to get a cache
> > >     slice handle, with the usecase ID as input.
> > > #2. The system cache driver gets the phandle of the system cache
> > >     instance from the client device to obtain the private data.
> > > #3. Based on the usecase ID, it performs a lookup in the private
> > >     data to get the cache slice handle.
> > > #4. It returns the cache slice handle to the client.
> > >
> > > If we don't have the connection between client & system cache,
> > > then the private data needs to be declared as a static global in
> > > the system cache driver, which limits us to just one instance of
> > > the system cache block.
> >
> > How many instances do you have?
> >
> > It is easier to put the data into the kernel and move it to DT
> > later than vice versa. I don't think it is a good idea to do a
> > custom binding here, one that only addresses caches and nothing
> > else in the interconnect. So either we define an extensible and
> > future-proof binding or we put the data into the kernel for now.
> >
> > Rob
>
> Hi Rob,
> Currently we have only one instance, but how do you propose we handle
> multiple instances in the future?

Worry about that when you have more than one. If it's only a
theoretical possibility then it can wait.

> Currently we do a lookup in the private data of the driver to get the
> slice handle, but if we were to remove the client connection we would
> have to make the lookup table global, and then we can't have more
> than one instance.
> Also, can you suggest any extensible interconnect binding that we can
> refer to?

There's been some work to add interconnect support for QCom chips. At
the moment there is no binding for it; it is just a kernel driver and
subsystem. I'm sure you can Google it as easily as I can.

Rob
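
To make the four-step flow quoted above concrete, below is a minimal
kernel-C sketch of what such a lookup could look like. It is an
illustration only: the property name "qcom,llcc", the function name
system_cache_get_slice(), and both structures are assumptions, not
taken from the posted patches. Step #2 is where the client<->cache DT
connection being debated matters: following the phandle is what lets
the driver find the right instance's private data rather than a
single static global.

	#include <linux/err.h>
	#include <linux/of.h>
	#include <linux/of_platform.h>
	#include <linux/platform_device.h>
	#include <linux/types.h>

	/* Hypothetical per-slice entry parsed from the LLCC node. */
	struct cache_slice {
		u32 usecase_id;		/* client usecase ID */
		u32 slice_id;		/* hardware slice identifier */
		size_t size;		/* slice size in bytes */
	};

	/* Hypothetical private data of one system cache instance. */
	struct llcc_drv_data {
		struct cache_slice *slices;
		unsigned int num_slices;
	};

	struct cache_slice *system_cache_get_slice(struct device *dev, u32 uid)
	{
		struct device_node *np;
		struct platform_device *pdev;
		struct llcc_drv_data *drv;
		unsigned int i;

		/*
		 * #1: the client passed its device and a usecase ID.
		 * #2: follow a phandle in the client's DT node (property
		 * name assumed here) to reach the LLCC instance and its
		 * private data.
		 */
		np = of_parse_phandle(dev->of_node, "qcom,llcc", 0);
		if (!np)
			return ERR_PTR(-ENODEV);

		pdev = of_find_device_by_node(np);
		of_node_put(np);
		if (!pdev)
			return ERR_PTR(-EPROBE_DEFER);

		drv = platform_get_drvdata(pdev);
		if (!drv)
			return ERR_PTR(-EPROBE_DEFER);

		/* #3: look up the slice for this usecase ID. */
		for (i = 0; i < drv->num_slices; i++)
			if (drv->slices[i].usecase_id == uid)
				return &drv->slices[i];	/* #4: hand it back */

		return ERR_PTR(-ENOENT);
	}

If the phandle were dropped, drv would have to come from one global
table instead, which is exactly the single-instance limitation
Rishabh describes.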