On 20.12.2023 09:04, Dikshita Agarwal wrote:
>
>
> On 12/18/2023 8:39 PM, Konrad Dybcio wrote:
>> On 18.12.2023 12:32, Dikshita Agarwal wrote:
>>> Add support for initializing Iris "resources", which are clocks,
>>> interconnects, power domains, reset clocks, and clock frequencies
>>> used for Iris hardware.
>>>
>>> Signed-off-by: Dikshita Agarwal <quic_dikshita@xxxxxxxxxxx>
>>> ---

[...]

>>> +
>>> +	for (i = 0; i < core->clk_count; i++) {
>>> +		cinfo = &core->clock_tbl[i];
>>> +		cinfo->name = plat_clk_table[i].name;
>>> +		cinfo->clk_id = plat_clk_table[i].clk_id;
>>> +		cinfo->has_scaling = plat_clk_table[i].has_scaling;
>>> +		cinfo->clk = devm_clk_get(core->dev, cinfo->name);
>>> +		if (IS_ERR(cinfo->clk)) {
>>> +			dev_err(core->dev,
>>> +				"%s: failed to get clock: %s\n", __func__, cinfo->name);
>>> +			return PTR_ERR(cinfo->clk);
>>> +		}
>>> +	}
>> Are you not going to use OPP for scaling the main RPMhPD with the core
>> clock?
>>
> We are using OPP for scaling the vcodec clk.
> Could you please elaborate you query here, may be I didn't understand fully.

It's just that this approach of scanning everything we know and expect
about each clock seems a bit unnecessary..

Going with the approach I suggested below (i.e. a separate struct clk for
important ones like the core or mem clock) simplifies this to the point
where you just call opp_set_rate on the important ones and you can throw
clk_bulk_ operations at the ones that simply need to be en/disabled.

Konrad

[...]

>>> +
>>> +struct power_domain_info {
>>> +	struct device *genpd_dev;
>>> +	const char *name;
>>> +};
>>> +
>>> +struct clock_info {
>>> +	struct clk *clk;
>>> +	const char *name;
>> I'm not sure why you need it
>>
>>> +	u32 clk_id;
>> Or this
>>
>>> +	bool has_scaling;
>> Or this
>>
>> you could probably do something like this:
>>
>> struct big_iris_struct {
>> 	[...]
>> 	struct clk *core_clk;
>> 	struct clk *memory_clk;
>> 	struct clk *some_important_scaled_clock;
>> 	struct clk_bulk_data less_important_nonscaling_clocks[X];
>> }
>>
>> and then make use of all of the great common upstream APIs to manage
>> them!
>>
> Will explore this and get back.
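
For reference, a minimal sketch of the split suggested above, using the
standard clk_bulk and OPP APIs. This is not the actual Iris code: the
struct layout, function names and the "core"/"iface"/"bus" clock names
are made up for illustration, and a real driver would take them from its
platform data and bindings.

/*
 * Sketch only: the clock that needs DVFS gets a dedicated struct clk and
 * is scaled through the OPP framework; the rest live in a clk_bulk table
 * and are only ever enabled/disabled.
 */
#include <linux/clk.h>
#include <linux/device.h>
#include <linux/pm_opp.h>

struct iris_core {
	struct device *dev;
	struct clk *core_clk;			/* scaled through OPP */
	struct clk_bulk_data bulk_clks[2];	/* only en/disabled */
};

static int iris_init_clocks(struct iris_core *core)
{
	int ret;

	/* The clock that needs scaling gets its own struct clk... */
	core->core_clk = devm_clk_get(core->dev, "core");
	if (IS_ERR(core->core_clk))
		return PTR_ERR(core->core_clk);

	/* ...everything else only needs toggling, so clk_bulk is enough. */
	core->bulk_clks[0].id = "iface";
	core->bulk_clks[1].id = "bus";
	ret = devm_clk_bulk_get(core->dev, ARRAY_SIZE(core->bulk_clks),
				core->bulk_clks);
	if (ret)
		return ret;

	/*
	 * The OPP table from DT pairs core clock rates with power domain
	 * levels; depending on how the clocks are described, the driver
	 * may also need to tell the OPP core which clock to scale.
	 */
	return devm_pm_opp_of_add_table(core->dev);
}

static int iris_power_on(struct iris_core *core, unsigned long freq)
{
	int ret;

	ret = clk_bulk_prepare_enable(ARRAY_SIZE(core->bulk_clks),
				      core->bulk_clks);
	if (ret)
		return ret;

	/* Sets the core clock rate and the matching power domain vote. */
	ret = dev_pm_opp_set_rate(core->dev, freq);
	if (ret)
		clk_bulk_disable_unprepare(ARRAY_SIZE(core->bulk_clks),
					   core->bulk_clks);

	return ret;
}

With this shape the power-on path stays the same no matter how many
purely on/off clocks a given SoC adds; only the bulk table grows, and
there is no per-clock name/clk_id/has_scaling bookkeeping in the driver.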