Quoting Amit Nischal (2018-05-04 03:45:12)
> On 2018-05-02 12:53, Stephen Boyd wrote:
> > Quoting Amit Nischal (2018-04-30 09:20:10)
> >> +
> >> +static struct clk_branch gcc_disp_gpll0_clk_src = {
> >> +	.halt_reg = 0x52004,
> >> +	.halt_check = BRANCH_HALT_DELAY,
> >
> > What about this one? It's not a phy, so I'm confused again why we're
> > unable to check the halt bit. To be clear(er), I don't see why we ever
> > want to have HALT_DELAY used. Hopefully we can remove that flag.
> >
> > From what I recall, the flag is there for clks that don't toggle their
> > status bit at all, but that we know take a few cycles to ungate the
> > upstream clk. So we threw a delay into the code to make sure that when
> > clk_enable() returned, a driver wouldn't try to use hardware before the
> > clk was actually on. But these cases should pretty much never happen,
> > hence all the pushback against this flag.
> >
>
> For these "*gpll0_clk_src" and "*gpll0_div_clk" clocks, there is no halt
> bit to check the status, and a delay of a few cycles is required so that
> the clock is turned on before a client driver uses the hardware.

Ok.. but then why is there a 'halt_reg' configured for the clk?

> >> +
> >> +static struct clk_branch gcc_ufs_card_rx_symbol_0_clk = {
> >> +	.halt_reg = 0x75018,
> >> +	.halt_check = BRANCH_HALT_DELAY,
> >
> > There are still HALT_DELAY flags for UFS though? Why?
>
> For ufs_card tx/rx symbol clocks, we don't poll the status bit, per the
> recommendation from the HW team. We can change the halt_check type to
> the newly implemented flag "BRANCH_HALT_SKIP". Please share your
> thoughts on changing the flag to "BRANCH_HALT_SKIP".

Yes, use HALT_SKIP please.

> >
> > Also, are you going to send DFS support for the QUP clks? I would like
> > to see that code merged soon.
>
> Taniya has sent the patches for DFS support for QUP clocks.
> https://patchwork.kernel.org/patch/10376951/
>

I'll take a look.
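
To be explicit about the HALT_SKIP suggestion above, here's roughly what
I'd expect the UFS symbol clk node to look like (a minimal sketch; only
the halt_check line changes, and the rest of the node stays as in your
posted patch):

static struct clk_branch gcc_ufs_card_rx_symbol_0_clk = {
	.halt_reg = 0x75018,
	/* Skip polling the halt bit entirely, per the HW team's guidance */
	.halt_check = BRANCH_HALT_SKIP,
	/* ... remainder of the node unchanged from the posted patch ... */
};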