On Mo, 2025-02-10 at 19:17 +0100, Michal Wilczynski wrote:
> On 2/4/25 18:18, Philipp Zabel wrote:
> > On Mo, 2025-02-03 at 19:15 +0100, Michal Wilczynski wrote:
[...]
> > > I think this is required because the MEM clock gate is somehow broken
> > > and marked as 'reserved' in manual, so instead as a workaround, since we
> > > can't reliably enable the 'mem' clock it's a good idea to reset the
> > > whole CLKGEN of the GPU.
> > 
> > If this is a workaround for broken gating of the "mem" clock, would it
> > be possible (and reasonable) to make this a separate reset control that
> > is handled by the clock driver?
...
> 
> Thank you for the detailed feedback, Philipp.
> 
> After further consideration, I believe keeping the current reset driver
> implementation would be preferable to moving the CLKGEN reset handling
> to the clock driver. While it's technically possible to implement this
> in the clock driver, I have concerns about the added complexity:
> 
> 1. We'd need to expose the CLKGEN reset separately in the reset driver

I'd expect this to simplify the reset driver.

> 2. The clock driver's dt-bindings would need modification to add an
> optional resets property

If it describes the hardware correctly, that should be fine.

> 3. We'd need custom clk_ops for all three clock gates (including a dummy
> 'mem' gate)
> 4. Each clock gate's .enable operation would need to handle CLKGEN reset
> deassertion

I accept these arguments, as I have no good feeling for how much
complexity this would actually add. In my mind it shouldn't be much:
the GPU clocks could all share the same refcounted implementation. The
first clock to get enabled would ungate both GPU_CORE and GPU_CFG_ACLK
gates and deassert GPU_SW_CLKGEN_RST, all in one place. The remaining
enable(s) would be no-ops. Would that work? (A rough sketch follows
further down.)

Whether a separate "dummy" MEM clock for the DT bindings is added or
not would not make a difference.

> While the clock framework could theoretically handle this, there's no
> clean way to express the requirement that the CLKGEN reset should only
> be deasserted after all clocks in the group are enabled. We could
> implement this explicitly, but it would make the code more complex and
> harder to understand.

Doing this in the clock driver would have the advantage of clk_enabled
GPU clocks actually staying physically enabled, without the reset
driver disabling them via GPU_SW_CLKGEN_RST from the outside.

> The current solution in the reset driver is simpler and clearer - it
> treats this as what it really is: a TH1520-specific reset sequence.

Yes. What this also is: a workaround for a SoC-specific defect in the
clock tree. I think it belongs in the clock driver because of this.

[...]
> Regarding the delay between clock enable and reset deassert - for SoCs
> like BPI-F3 with a single reset line, implementing this in the GPU
> consumer driver makes perfect sense. However, for the T-HEAD SoC, moving
> the delay there would actually complicate things since we need to manage
> both the CLKGEN and GPU reset lines in a specific sequence. Having this
> handled entirely within the reset driver keeps the implementation
> cleaner.

You could delay in both places, it's just a microsecond after all.

Whether the workaround is implemented in the reset driver or in the
clock driver, I wouldn't want the GPU driver to have to carry a special
case for TH1520.

> Does this reasoning align with your thoughts? I'm happy to explore the
> clock driver approach further if you still see significant advantages to
> that solution.
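To make the suggestion above more concrete, here is a rough, completely
untested sketch of the shared refcounted enable path I have in mind.
The register offset, bit names, and the th1520_gpu_clk struct below are
made up for illustration; only the pattern matters:

#include <linux/bits.h>
#include <linux/clk-provider.h>
#include <linux/container_of.h>
#include <linux/delay.h>
#include <linux/io.h>
#include <linux/reset.h>
#include <linux/spinlock.h>

#define GPU_CLK_CFG		0x0	/* hypothetical register offset */
#define GPU_CORE_CLK_EN		BIT(0)	/* hypothetical gate bit */
#define GPU_CFG_ACLK_EN		BIT(1)	/* hypothetical gate bit */

struct th1520_gpu_clk {
	struct clk_hw hw;
	void __iomem *base;
	struct reset_control *clkgen_rst;	/* GPU_SW_CLKGEN_RST */
};

static DEFINE_SPINLOCK(th1520_gpu_clk_lock);
static unsigned int th1520_gpu_clk_refcnt;

static int th1520_gpu_clk_enable(struct clk_hw *hw)
{
	struct th1520_gpu_clk *gpu = container_of(hw, struct th1520_gpu_clk, hw);
	unsigned long flags;
	u32 val;

	spin_lock_irqsave(&th1520_gpu_clk_lock, flags);

	/*
	 * The first GPU clock to be enabled ungates both GPU_CORE and
	 * GPU_CFG_ACLK and releases the CLKGEN reset, all in one place.
	 * The remaining enables only bump the refcount.
	 */
	if (th1520_gpu_clk_refcnt++ == 0) {
		val = readl(gpu->base + GPU_CLK_CFG);
		val |= GPU_CORE_CLK_EN | GPU_CFG_ACLK_EN;
		writel(val, gpu->base + GPU_CLK_CFG);

		udelay(1);	/* the microsecond mentioned above */

		reset_control_deassert(gpu->clkgen_rst);
	}

	spin_unlock_irqrestore(&th1520_gpu_clk_lock, flags);

	return 0;
}

static void th1520_gpu_clk_disable(struct clk_hw *hw)
{
	struct th1520_gpu_clk *gpu = container_of(hw, struct th1520_gpu_clk, hw);
	unsigned long flags;
	u32 val;

	spin_lock_irqsave(&th1520_gpu_clk_lock, flags);

	/* The last GPU clock to be disabled asserts the reset and gates both. */
	if (--th1520_gpu_clk_refcnt == 0) {
		reset_control_assert(gpu->clkgen_rst);

		val = readl(gpu->base + GPU_CLK_CFG);
		val &= ~(GPU_CORE_CLK_EN | GPU_CFG_ACLK_EN);
		writel(val, gpu->base + GPU_CLK_CFG);
	}

	spin_unlock_irqrestore(&th1520_gpu_clk_lock, flags);
}

static const struct clk_ops th1520_gpu_clk_ops = {
	.enable = th1520_gpu_clk_enable,
	.disable = th1520_gpu_clk_disable,
};

The CLKGEN reset would come from the optional resets property in the
clock binding. Since .enable/.disable run in atomic context, this
assumes the reset controller ops don't sleep, which should be the case
for a plain MMIO reset driver.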
I won't object to carrying this in the reset driver if the clock
implementation turns out to be unreasonably complex, but I currently
don't expect that to be the case. Please give it a shot.

regards
Philipp