Hi Rob,

On 8/10/20 11:52 AM, Suman Anna wrote:
> Hi Rob,
>
> On 7/27/20 5:39 PM, Suman Anna wrote:
>> Hi Rob,
>>
>> On 7/16/20 2:43 PM, Stefano Stabellini wrote:
>>> On Thu, 16 Jul 2020, Mathieu Poirier wrote:
>>>> Hi Rob,
>>>>
>>>> On Tue, Jul 14, 2020 at 11:15:53AM -0600, Rob Herring wrote:
>>>>> On Mon, Jun 29, 2020 at 09:49:19PM -0500, Suman Anna wrote:
>>>>>> The Texas Instruments K3 family of SoCs has one or more dual-core
>>>>>> Arm Cortex R5F processor subsystems/clusters (R5FSS). The clusters
>>>>>> can also be split between multiple voltage domains. Add the device
>>>>>> tree bindings document for these R5F subsystem devices. These R5F
>>>>>> processors do not have an MMU, and so require fixed memory carveout
>>>>>> regions matching the firmware image addresses. The nodes require
>>>>>> more than one memory region, with the first memory region used for
>>>>>> DMA allocations at runtime. The remaining memory regions are
>>>>>> reserved and are used for loading and running the R5F remote
>>>>>> processors. The R5F processors can also optionally use any internal
>>>>>> on-chip SRAM memories, either for executing code or for fast-access
>>>>>> data.
>>>>>>
>>>>>> The added example illustrates the DT nodes for the single R5FSS
>>>>>> device present on the K3 AM65x family of SoCs.
>>>>>>
>>>>>> Signed-off-by: Suman Anna <s-anna@xxxxxx>
>>>>>> ---
>>>>>> v2:
>>>>>>  - Renamed "lockstep-mode" property to "ti,cluster-mode"
>>>>>
>>>>> I don't think that's a move in the right direction given this is at
>>>>> least partially a standard feature.
>>>>>
>>>>> As I said before, I'm very hesitant to accept anything here given I
>>>>> know the desires and activity to define 'system Devicetrees', in
>>>>> which TI is participating. While maybe an rproc node is sufficient
>>>>> for a DSP, it seems multiple vendors have R cores and want to define
>>>>> them in system DT.
>>
>> Ping on this discussion. TI is participating in the System DT
>> evolution in general, but we don't have any plans to use DTS on our
>> remote cores. We have our own auto-generated Chip-Support-Library
>> (CSL) code that gets used in our firmwares.
>>
>> Also, most of the properties I defined are rather standard properties.
>> I have posted a revised v3 [1] after the common ti,sci properties
>> refactoring. This series is only waiting on the bindings. I am happy
>> to change any ti,-prefixed properties. I have one open question [2],
>> on identifying the R5F core, for which I am waiting on a response
>> from you.
>
> Ping on this.

Any comments on this? This discussion is what's holding up this series
from getting merged.

Also, FWIW, I spent a bit of time looking at the R5s (called RPU) in
the Xilinx ZynqMP, and the integration aspects are very different
between the TI and Xilinx SoCs, so I do not think a single common
binding is even possible between the two SoCs. To cite a few of the
differences:

1. TI SoCs require the power/resets to be released for both cores in
   LockStep mode, while it is enough to release just the Core0 resets
   on ZynqMP. So our binding requires that both cores be defined, since
   the reset controls are defined per core, while you don't see them on
   the RPU nodes.
2. There are specific core reset sequences in LockStep and Split modes
   on TI SoCs; I am not sure if there are any with Xilinx SoCs.
3. The TCMs are embedded within the R5F sub-system on TI SoCs, and are
   controlled by the same power and clock as the R5Fs. There is an
   additional CPU halt line that controls the core execution and allows
   us to enable access to the TCMs. The ZynqMP looks to have completely
   independent control of the TCMs, which is why they are represented
   as individual mmio-sram nodes in the Xilinx binding (see the sketch
   after this list).
4. The TCMs, and which one appears at R5 address 0, are programmable on
   TI SoCs; I couldn't tell if this is the case with Xilinx SoCs.
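
To make the contrast in point 3 concrete, below is a minimal sketch of
the two representations. Treat it as illustrative only: the node names,
addresses, and ti,-prefixed properties are shorthand for what is in the
v3 posting [1] and the Xilinx RPU patches, not a definitive binding.

    /* TI style: TCMs are 'reg' entries owned by the R5F core node,
     * sharing the core's power/clock, with a per-core reset
     * (addresses and cell sizes illustrative) */
    r5f0: r5f@41000000 {
            compatible = "ti,am654-r5f";
            reg = <0x41000000 0x8000>,     /* ATCM */
                  <0x41010000 0x8000>;     /* BTCM */
            reg-names = "atcm", "btcm";
            resets = <&k3_reset 159 1>;    /* required even in LockStep mode */
            ti,loczrama = <1>;             /* selects the TCM at R5 address 0 */
    };

    /* Xilinx style: independently controlled TCMs modeled as
     * standalone mmio-sram nodes, referenced from the RPU node */
    tcm_0a: sram@ffe00000 {
            compatible = "mmio-sram";
            reg = <0xffe00000 0x10000>;
    };

The point being that in the TI representation the TCMs cannot outlive
the core node they sit in, which matches the hardware integration
described in point 3.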

Ben and Stefano, please do clarify if I am off on any of the above
differences.

regards
Suman

>
> regards
> Suman
>
>>
>> regards
>> Suman
>>
>> [1] https://patchwork.kernel.org/patch/11679331/
>> [2] https://patchwork.kernel.org/comment/23273441/
>>
>>>>>
>>>>> Though the system DT effort has not yet given any thought to what
>>>>> the view of one processor or instance into another instance is
>>>>> (which is what this binding is). We'll still need something defined
>>>>> for that, but I'd expect that to be dependent on what is defined
>>>>> for system DT.
>>>>
>>>> Efforts related to the definition of the system DT are under way,
>>>> something I expect to keep going for some time to come. I agree
>>>> with the need to use the system DT to define remote processors and
>>>> I look forward to the time we can do so.
>>>
>>> I'll take this opportunity to add that I should be able to publicly
>>> present a System Device Tree proposal for this during the next call
>>> (the next one after the call early next week, which already has a
>>> full agenda).
>>>
>>>> That being said, we need to find a consensus on how to move forward
>>>> with patches that are ready to be merged. What is your opinion on
>>>> that?
>>>
>>> In my opinion we don't have to necessarily wait for System Device
>>> Tree to make progress with those if they look OK.
>>>
>>
>