Re: [PATCH v5 1/3] power-domain: add power domain drivers for Rockchip platform

[...]

>>>
>>> +
>>> +       list_for_each_entry(de, &pd->dev_list, node) {
>>> +               i += 1;
>>> +               pm_clk_resume(pd->dev);
>>
>> Do you really need to call pm_clk_resume() number of times that there
>> are devices in power domain? Did you want it to be
>>
>>                 pm_clk_resume(de->dev);
>>
>> by any chance?

I was just about to ask the same question as Dmitry did. :-)

>
> You are right. I will modify this in the next version.

Now, does that also mean you would like to assign the ->start|stop()
callbacks in the struct gpd_dev_ops to pm_clk_suspend|resume()? Or do
you intend to handle that from each driver instead?
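For reference, a minimal userspace sketch of the corrected loop (the list helpers mimic <linux/list.h> and the counter stands in for the real pm_clk_resume(); this is illustrative, not kernel code):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace sketch of the fix under discussion: the loop must resume
 * each entry's own device (de->dev), not pd->dev once per entry.
 * The list helpers below mimic <linux/list.h>. */
struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct dev_entry {
	int resumed;            /* counts pm_clk_resume() calls on this dev */
	struct list_head node;
};

static void INIT_LIST_HEAD(struct list_head *head)
{
	head->next = head->prev = head;
}

static void list_add_tail(struct list_head *entry, struct list_head *head)
{
	entry->prev = head->prev;
	entry->next = head;
	head->prev->next = entry;
	head->prev = entry;
}

/* stand-in for pm_clk_resume(de->dev) */
static void pm_clk_resume(struct dev_entry *de)
{
	de->resumed++;
}

/* equivalent of:
 *	list_for_each_entry(de, &pd->dev_list, node)
 *		pm_clk_resume(de->dev);
 */
static void pd_resume_all(struct list_head *dev_list)
{
	struct list_head *pos;

	for (pos = dev_list->next; pos != dev_list; pos = pos->next)
		pm_clk_resume(container_of(pos, struct dev_entry, node));
}
```

With the original code, pd->dev would be resumed N times while the member devices were never touched; with the fix, each entry's device is resumed exactly once.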

>>>
>>> +       }
>>> +
>>> +       /* no clk, set power domain will fail */
>>> +       if (i == 0) {
>>> +               pr_err("%s: failed to on/off power domain!", __func__);
>>> +               spin_unlock_irq(&pd->dev_lock);
>>> +               return ret;
>>> +       }
>>
>> Instead of counting I'd do
>>
>>         if (list_empty(&pd->dev_list)) {
>>                 pr_warn("%s: no devices in power domain\n", __func__);
>>                 goto out;
>>         }
>>
>> in the beginning of the function.
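A minimal userspace sketch of the suggested early-return pattern (list helpers mimic <linux/list.h>; the -ENODEV value is an assumption for the sketch, not taken from the patch):

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* Minimal stand-ins for the kernel's circular list API
 * (names mimic <linux/list.h>; this is not kernel code). */
struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *head)
{
	head->next = head->prev = head;
}

static void list_add_tail(struct list_head *entry, struct list_head *head)
{
	entry->prev = head->prev;
	entry->next = head;
	head->prev->next = entry;
	head->prev = entry;
}

static int list_empty(const struct list_head *head)
{
	return head->next == head;
}

/* The reviewer's suggestion: bail out early when the domain has no
 * devices, instead of counting entries inside the loop. */
static int pd_power(struct list_head *dev_list)
{
	if (list_empty(dev_list)) {
		fprintf(stderr, "%s: no devices in power domain\n", __func__);
		return -19; /* -ENODEV, an assumed error code */
	}
	/* ... iterate the list and toggle the power domain ... */
	return 0;
}
```

This removes the `i` counter entirely and keeps the failure path at the top of the function where it is easiest to read.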
>
> This is a good idea.
>
>>> +
>>> +       ret = rockchip_pmu_set_power_domain(pd, power_on);
>>> +
>>> +       list_for_each_entry(de, &pd->dev_list, node) {
>>> +               pm_clk_suspend(pd->dev);
>>
>> Same here?
>>
>>> +       }
>>> +
>>> +       spin_unlock_irq(&pd->dev_lock);
>>> +
>>> +       return ret;
>>> +}
>>> +
>>> +static int rockchip_pd_power_on(struct generic_pm_domain *domain)
>>> +{
>>> +       struct rockchip_domain *pd = to_rockchip_pd(domain);
>>> +
>>> +       return rockchip_pd_power(pd, true);
>>> +}
>>> +
>>> +static int rockchip_pd_power_off(struct generic_pm_domain *domain)
>>> +{
>>> +       struct rockchip_domain *pd = to_rockchip_pd(domain);
>>> +
>>> +       return rockchip_pd_power(pd, false);
>>> +}
>>> +
>>> +void rockchip_pm_domain_attach_dev(struct device *dev)
>>> +{
>>> +       int ret;
>>> +       int i = 0;
>>> +       struct clk *clk;
>>> +       struct rockchip_domain *pd;
>>> +       struct rockchip_dev_entry *de;
>>> +
>>> +       pd = (struct rockchip_domain *)dev->pm_domain;
>>> +       ret = pm_clk_create(dev);
>>> +       if (ret) {
>>> +               dev_err(dev, "pm_clk_create failed %d\n", ret);
>>> +               return;
>>> +       };
>>
>> Stray semicolon.
>>>
>>> +
>>> +       while ((clk = of_clk_get(dev->of_node, i++)) && !IS_ERR(clk)) {
>>> +               ret = pm_clk_add_clk(dev, clk);
>>> +               if (ret) {
>>> +                       dev_err(dev, "pm_clk_add_clk failed %d\n", ret);
>>> +                       goto clk_err;
>>> +               };
>>> +       }
>>> +
>>> +       de = devm_kcalloc(pd->dev, 1,
>>> +                       sizeof(struct rockchip_dev_entry *), GFP_KERNEL);
>>
>> Why devm_calloc for a single element and not devm_kzalloc? Also, I am a
>> bit concerned about using devm_* API here. They are better reserved fir
>> driver's ->probe() paths whereas we are called from
>> dev_pm_domain_attach() which is more general API (yes, currently it is
>> used by buses probing code, but that might change in the future).

Using the devm_* API is supposed to work from here. I kept this in
mind while we added the new dev_pm_domain_attach|detach() API. The
buses also handle -EPROBE_DEFER.

Now, I just realized that while Geert added attach|detach_dev()
callbacks for the generic PM domain, those are both "void" callbacks.
This means deferred-probe error handling is broken for these
callbacks. We should convert the attach_dev() callback to return an
int; I will cook a patch immediately.
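The point can be sketched in plain C: a void callback swallows the error, while an int return lets -EPROBE_DEFER propagate back to the bus code. The ops layout and the EPROBE_DEFER value here are illustrative, not the real genpd definitions:

```c
#include <assert.h>

/* Illustrative value; the real EPROBE_DEFER lives in errno.h
 * in the kernel tree. */
#define EPROBE_DEFER 517

/* Sketch of an attach_dev() callback converted from void to int. */
struct gpd_dev_ops_sketch {
	int (*attach_dev)(int dev);
};

static int attach_needs_clock(int dev)
{
	(void)dev;
	return -EPROBE_DEFER;	/* e.g. a required clock is not ready yet */
}

static int domain_attach(const struct gpd_dev_ops_sketch *ops, int dev)
{
	int ret = ops->attach_dev(dev);

	if (ret)
		return ret;	/* the defer request reaches the caller */
	return 0;
}
```

With a void callback, domain_attach() would have no way to learn that the attach should be retried later.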

>>
>> Also, where is OOM error handling?
>
> Ok, I will change it to use devm_kzalloc.
> The number of devices registered to the pm domain is not large.
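A userspace analogue of the suggested allocation fix. Note that the quoted hunk also sizes the allocation with sizeof(struct rockchip_dev_entry *), i.e. a pointer's size, where the struct's size is presumably intended; devm_kzalloc itself is kernel-only, so calloc stands in here:

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for struct rockchip_dev_entry. */
struct dev_entry {
	void *dev;
	struct dev_entry *next;
};

/* Analogue of devm_kzalloc(pd->dev, sizeof(*de), GFP_KERNEL):
 * one zeroed element, sized by the struct (not by a pointer),
 * with the missing OOM check added. */
static struct dev_entry *dev_entry_alloc(void)
{
	struct dev_entry *de = calloc(1, sizeof(*de)); /* sizeof(*de), not sizeof(de) */

	if (!de)
		return NULL;	/* caller must handle the OOM case */
	return de;
}
```

Using `sizeof(*de)` keeps the size correct even if the struct's type is later renamed.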

[...]

Kind regards
Uffe
--
To unsubscribe from this list: send the line "unsubscribe linux-doc" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



