Hi,

On Wed, Nov 2, 2022 at 10:07 AM Stephen Boyd <swboyd@xxxxxxxxxxxx> wrote:
>
> We shouldn't be calling runtime PM APIs from within the genpd
> enable/disable path for a couple reasons.
>
> First, this causes an AA lockdep splat because genpd can call into genpd
> code again while holding the genpd lock.
>
>  WARNING: possible recursive locking detected
>  5.19.0-rc2-lockdep+ #7 Not tainted
>  --------------------------------------------
>  kworker/2:1/49 is trying to acquire lock:
>  ffffffeea0370788 (&genpd->mlock){+.+.}-{3:3}, at: genpd_lock_mtx+0x24/0x30
>
>  but task is already holding lock:
>  ffffffeea03710a8 (&genpd->mlock){+.+.}-{3:3}, at: genpd_lock_mtx+0x24/0x30
>
>  other info that might help us debug this:
>   Possible unsafe locking scenario:
>
>         CPU0
>         ----
>    lock(&genpd->mlock);
>    lock(&genpd->mlock);
>
>   *** DEADLOCK ***
>
>  May be due to missing lock nesting notation
>
>  3 locks held by kworker/2:1/49:
>   #0: 74ffff80811a5748 ((wq_completion)pm){+.+.}-{0:0}, at: process_one_work+0x320/0x5fc
>   #1: ffffffc008537cf8 ((work_completion)(&genpd->power_off_work)){+.+.}-{0:0}, at: process_one_work+0x354/0x5fc
>   #2: ffffffeea03710a8 (&genpd->mlock){+.+.}-{3:3}, at: genpd_lock_mtx+0x24/0x30
>
>  stack backtrace:
>  CPU: 2 PID: 49 Comm: kworker/2:1 Not tainted 5.19.0-rc2-lockdep+ #7
>  Hardware name: Google Lazor (rev3 - 8) with KB Backlight (DT)
>  Workqueue: pm genpd_power_off_work_fn
>  Call trace:
>   dump_backtrace+0x1a0/0x200
>   show_stack+0x24/0x30
>   dump_stack_lvl+0x7c/0xa0
>   dump_stack+0x18/0x44
>   __lock_acquire+0xb38/0x3634
>   lock_acquire+0x180/0x2d4
>   __mutex_lock_common+0x118/0xe30
>   mutex_lock_nested+0x70/0x7c
>   genpd_lock_mtx+0x24/0x30
>   genpd_runtime_suspend+0x2f0/0x414
>   __rpm_callback+0xdc/0x1b8
>   rpm_callback+0x4c/0xcc
>   rpm_suspend+0x21c/0x5f0
>   rpm_idle+0x17c/0x1e0
>   __pm_runtime_idle+0x78/0xcc
>   gdsc_disable+0x24c/0x26c
>   _genpd_power_off+0xd4/0x1c4
>   genpd_power_off+0x2d8/0x41c
>   genpd_power_off_work_fn+0x60/0x94
>   process_one_work+0x398/0x5fc
>   worker_thread+0x42c/0x6c4
>   kthread+0x194/0x1b4
>   ret_from_fork+0x10/0x20
>
> Second, this confuses runtime PM on CoachZ for the camera devices by
> causing the camera clock controller's runtime PM usage_count to go
> negative after resuming from suspend. This is because runtime PM is
> being used on the clock controller while runtime PM is disabled for the
> device.
>
> The reason for the negative count is because a GDSC is represented as a
> genpd and each genpd that is attached to a device is resumed during the
> noirq phase of system wide suspend/resume (see the noirq suspend ops
> assignment in pm_genpd_init() for more details). The camera GDSCs are
> attached to camera devices with the 'power-domains' property in DT.
> Every device has runtime PM disabled in the late system suspend phase
> via __device_suspend_late(). Runtime PM is not usable until runtime PM
> is enabled in device_resume_early(). The noirq phases run after the
> 'late' and before the 'early' phase of suspend/resume. When the genpds
> are resumed in genpd_resume_noirq(), we call down into gdsc_enable()
> that calls pm_runtime_resume_and_get() and that returns -EACCES to
> indicate failure to resume because runtime PM is disabled for all
> devices.
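
To make sure I followed this part, the failing path amounts to
something like the sketch below. This is purely illustrative (the
function name is made up and I'm paraphrasing the pre-patch driver
from memory, so don't take it as the exact code), but it shows why
the call can only fail once we're in the noirq phase:

#include <linux/pm_domain.h>
#include <linux/pm_runtime.h>

#include "gdsc.h"

/* Illustrative only: roughly what the old gdsc_enable() path did. */
static int gdsc_enable_sketch(struct generic_pm_domain *domain)
{
	/* The GDSC embeds its genpd, so recover the wrapper from it. */
	struct gdsc *sc = container_of(domain, struct gdsc, pd);
	int ret;

	/*
	 * genpd_resume_noirq() runs after __device_suspend_late() has
	 * disabled runtime PM and before device_resume_early() has
	 * re-enabled it, so at this point the call below fails with
	 * -EACCES, as described above.
	 */
	ret = pm_runtime_resume_and_get(sc->dev);
	if (ret < 0)
		return ret;

	/* ... the real function would poke GDSC registers here ... */

	return 0;
}
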
> Upon closer inspection, calling runtime PM APIs like this in the GDSC
> driver doesn't make sense. It was intended to make sure the GDSC for the
> clock controller providing other GDSCs was enabled, specifically the
> MMCX GDSC for the display clk controller on SM8250 (sm8250-dispcc), so
> that GDSC register accesses succeeded. That will already happen because
> we make the 'dev->pm_domain' a parent domain of each GDSC we register in
> gdsc_register() via pm_genpd_add_subdomain(). When any of these GDSCs
> are accessed, we'll enable the parent domain (in this specific case
> MMCX).
>
> We also remove any getting of runtime PM during registration, because
> when a genpd is registered it increments the count on the parent if the
> genpd itself is already enabled. And finally, the runtime PM state of
> the clk controller registering the GDSC shouldn't matter to the
> subdomain setup. Therefore we always assign 'dev' unconditionally so
> that when GDSCs are removed we properly unlink the GDSC from the clk
> controller's pm_domain.
>
> Cc: Dmitry Baryshkov <dmitry.baryshkov@xxxxxxxxxx>
> Cc: Johan Hovold <johan+linaro@xxxxxxxxxx>
> Cc: Ulf Hansson <ulf.hansson@xxxxxxxxxx>
> Cc: Taniya Das <quic_tdas@xxxxxxxxxxx>
> Cc: Satya Priya <quic_c_skakit@xxxxxxxxxxx>
> Cc: Douglas Anderson <dianders@xxxxxxxxxxxx>
> Cc: Matthias Kaehlcke <mka@xxxxxxxxxxxx>
> Reported-by: Stephen Boyd <swboyd@xxxxxxxxxxxx>
> Fixes: 1b771839de05 ("clk: qcom: gdsc: enable optional power domain support")
> Signed-off-by: Stephen Boyd <swboyd@xxxxxxxxxxxx>
> ---
>
> Changes from v1 (https://lore.kernel.org/r/20221101233421.997149-1-swboyd@xxxxxxxxxxxx):
>  * Fix ret thinko
>  * Update kerneldoc on 'dev' member
>
>  drivers/clk/qcom/gdsc.c | 62 +++++------------------------------------
>  drivers/clk/qcom/gdsc.h |  2 +-
>  2 files changed, 8 insertions(+), 56 deletions(-)

Though I'm not an expert, this seems reasonable to me and is certainly
an improvement in that it no longer makes pm_runtime calls during the
"noirq" resume phase. Thus, from a non-expert:

Reviewed-by: Douglas Anderson <dianders@xxxxxxxxxxxx>

This also seems to avoid the problems I reported [1]. Even though the
root cause (the pm_runtime vs. clk prepare lock deadlock) is still
theoretically possible, at least the problem is no longer tickled for
me (I tested on a sc7280-herobrine-villager board). Thus, I'm OK with:

Tested-by: Douglas Anderson <dianders@xxxxxxxxxxxx>

[1] https://lore.kernel.org/r/20220922154354.2486595-1-dianders@xxxxxxxxxxxx
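
One more note, mostly to check that I've read the new subdomain setup
right: with the pm_runtime calls gone, the parent power-up happens
entirely through genpd's parent/child mechanism. Roughly, the idea as
I understand it (my own sketch with a made-up helper name, not a
quote of the patch):

#include <linux/pm_domain.h>

#include "gdsc.h"

/*
 * Illustration of the linking done in gdsc_register(): if the clock
 * controller device itself sits in a power domain (e.g. MMCX for
 * sm8250-dispcc), make that domain the parent of the GDSC's genpd.
 * genpd then powers on the parent before powering on the GDSC, so
 * GDSC register accesses work without any pm_runtime involvement.
 */
static int gdsc_link_to_parent_sketch(struct device *dev, struct gdsc *sc)
{
	if (!dev->pm_domain)
		return 0;

	return pm_genpd_add_subdomain(pd_to_genpd(dev->pm_domain), &sc->pd);
}

If I have that right, assigning 'dev' unconditionally then lets the
removal path call pm_genpd_remove_subdomain() symmetrically, which
matches the commit text above.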