On Fri, 23 Aug 2024 14:51:06 +0200
Krzysztof Kozlowski <krzysztof.kozlowski@xxxxxxxxxx> wrote:

> Simplify error handling (smaller error handling) over locks with
> guard().
>
> Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@xxxxxxxxxx>

Musing inline. LGTM

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@xxxxxxxxxx>

> ---
>  drivers/pmdomain/rockchip/pm-domains.c | 5 +----
>  1 file changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/drivers/pmdomain/rockchip/pm-domains.c b/drivers/pmdomain/rockchip/pm-domains.c
> index 5679ad336a11..538dde58d924 100644
> --- a/drivers/pmdomain/rockchip/pm-domains.c
> +++ b/drivers/pmdomain/rockchip/pm-domains.c
> @@ -910,7 +910,7 @@ static int rockchip_pm_domain_probe(struct platform_device *pdev)
>  	 * Prevent any rockchip_pmu_block() from racing with the remainder of
>  	 * setup (clocks, register initialization).
>  	 */
> -	mutex_lock(&dmc_pmu_mutex);
> +	guard(mutex)(&dmc_pmu_mutex);
>
>  	for_each_available_child_of_node_scoped(np, node) {
>  		error = rockchip_pm_add_one_domain(pmu, node);
> @@ -943,13 +943,10 @@ static int rockchip_pm_domain_probe(struct platform_device *pdev)
>  	if (!WARN_ON_ONCE(dmc_pmu))
>  		dmc_pmu = pmu;
>
> -	mutex_unlock(&dmc_pmu_mutex);
> -
>  	return 0;
>
>  err_out:
>  	rockchip_pm_domain_cleanup(pmu);

I wonder. Could you use a devm_add_action_or_reset() for this and allow
early returns throughout?  Would need to take the lock again perhaps,
and I haven't checked whether there is any issue in dropping and
retaking the mutex. The block logic is non-obvious, so I couldn't
quickly figure this out.

> -	mutex_unlock(&dmc_pmu_mutex);
>  	return error;
> }
>