[AMD Official Use Only - Internal Distribution Only]

Maybe we can use the acpi_target_system_state() interface to identify the
system-wide suspend target level Sx, and then parse the return code against
the following macro definitions. If I have the bandwidth, I will update and
refine the AMDGPU Sx (S0..S5) suspend/resume sequence.

#define ACPI_STATE_S0 (u8) 0
#define ACPI_STATE_S1 (u8) 1
#define ACPI_STATE_S2 (u8) 2
#define ACPI_STATE_S3 (u8) 3
#define ACPI_STATE_S4 (u8) 4
#define ACPI_STATE_S5 (u8) 5

Thanks,
Prike

> -----Original Message-----
> From: amd-gfx <amd-gfx-bounces@xxxxxxxxxxxxxxxxxxxxx> On Behalf Of
> Bhardwaj, Rajneesh
> Sent: Wednesday, March 10, 2021 1:25 AM
> To: Alex Deucher <alexdeucher@xxxxxxxxx>; Lazar, Lijo <Lijo.Lazar@xxxxxxx>
> Cc: Deucher, Alexander <Alexander.Deucher@xxxxxxx>; amd-gfx@xxxxxxxxxxxxxxxxxxxxx
> Subject: Re: [PATCH 4/7] drm/amdgpu: track what pmops flow we are in
>
> pm_message_t events seem to be the right thing to use here instead of
> the driver's privately managed power states.
> Please have a look:
>
> https://elixir.bootlin.com/linux/v4.7/source/drivers/gpu/drm/i915/i915_drv.c#L714
>
> https://elixir.bootlin.com/linux/v4.7/source/drivers/gpu/drm/drm_sysfs.c#L43
>
> Thanks,
>
> Rajneesh
>
>
> On 3/9/2021 10:47 AM, Alex Deucher wrote:
> > [CAUTION: External Email]
> >
> > On Tue, Mar 9, 2021 at 1:19 AM Lazar, Lijo <Lijo.Lazar@xxxxxxx> wrote:
> >> [AMD Public Use]
> >>
> >> This seems to be a duplicate of the dev_pm_info states. Can't we reuse that?
> >
> > Are you referring to the PM_EVENT_ messages in
> > dev_pm_info.power_state? Maybe. I was not able to find much
> > documentation on how those should be used. Do you know?
> >
> > Alex
> >
> >
> >> Thanks,
> >> Lijo
> >>
> >> -----Original Message-----
> >> From: amd-gfx <amd-gfx-bounces@xxxxxxxxxxxxxxxxxxxxx> On Behalf Of
> >> Alex Deucher
> >> Sent: Tuesday, March 9, 2021 9:40 AM
> >> To: amd-gfx@xxxxxxxxxxxxxxxxxxxxx
> >> Cc: Deucher, Alexander <Alexander.Deucher@xxxxxxx>
> >> Subject: [PATCH 4/7] drm/amdgpu: track what pmops flow we are in
> >>
> >> We reuse the same suspend and resume functions for all of the pmops
> >> states, so flag what state we are in so that we can alter behavior
> >> deeper in the driver depending on the current flow.
> >>
> >> Signed-off-by: Alex Deucher <alexander.deucher@xxxxxxx>
> >> ---
> >>  drivers/gpu/drm/amd/amdgpu/amdgpu.h       | 20 +++++++-
> >>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   | 58 +++++++++++++++++++----
> >>  drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c |  3 +-
> >>  3 files changed, 70 insertions(+), 11 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> >> index d47626ce9bc5..4ddc5cc983c7 100644
> >> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> >> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> >> @@ -347,6 +347,24 @@ int amdgpu_device_ip_block_add(struct amdgpu_device *adev,
> >>  bool amdgpu_get_bios(struct amdgpu_device *adev);
> >>  bool amdgpu_read_bios(struct amdgpu_device *adev);
> >>
> >> +/*
> >> + * PM Ops
> >> + */
> >> +enum amdgpu_pmops_state {
> >> +	AMDGPU_PMOPS_NONE = 0,
> >> +	AMDGPU_PMOPS_PREPARE,
> >> +	AMDGPU_PMOPS_COMPLETE,
> >> +	AMDGPU_PMOPS_SUSPEND,
> >> +	AMDGPU_PMOPS_RESUME,
> >> +	AMDGPU_PMOPS_FREEZE,
> >> +	AMDGPU_PMOPS_THAW,
> >> +	AMDGPU_PMOPS_POWEROFF,
> >> +	AMDGPU_PMOPS_RESTORE,
> >> +	AMDGPU_PMOPS_RUNTIME_SUSPEND,
> >> +	AMDGPU_PMOPS_RUNTIME_RESUME,
> >> +	AMDGPU_PMOPS_RUNTIME_IDLE,
> >> +};
> >> +
> >>  /*
> >>   * Clocks
> >>   */
> >> @@ -1019,8 +1037,8 @@ struct amdgpu_device {
> >>  	u8				reset_magic[AMDGPU_RESET_MAGIC_NUM];
> >>
> >>  	/* s3/s4 mask */
> >> +	enum amdgpu_pmops_state		pmops_state;
> >>  	bool				in_suspend;
> >> -	bool				in_hibernate;
> >>
> >>  	/*
> >>  	 * The combination flag in_poweroff_reboot_com used to identify the poweroff
> >> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> >> index 3e6bb7d79652..0312c52bd39d 100644
> >> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> >> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> >> @@ -1297,34 +1297,54 @@ amdgpu_pci_shutdown(struct pci_dev *pdev)
> >>  static int amdgpu_pmops_prepare(struct device *dev)
> >>  {
> >>  	struct drm_device *drm_dev = dev_get_drvdata(dev);
> >> +	struct amdgpu_device *adev = drm_to_adev(drm_dev);
> >> +	int r;
> >>
> >> +	adev->pmops_state = AMDGPU_PMOPS_PREPARE;
> >>  	/* Return a positive number here so
> >>  	 * DPM_FLAG_SMART_SUSPEND works properly
> >>  	 */
> >>  	if (amdgpu_device_supports_boco(drm_dev))
> >> -		return pm_runtime_suspended(dev) &&
> >> +		r = pm_runtime_suspended(dev) &&
> >>  			pm_suspend_via_firmware();
> >> -
> >> -	return 0;
> >> +	else
> >> +		r = 0;
> >> +	adev->pmops_state = AMDGPU_PMOPS_NONE;
> >> +	return r;
> >>  }
> >>
> >>  static void amdgpu_pmops_complete(struct device *dev)
> >>  {
> >> +	struct drm_device *drm_dev = dev_get_drvdata(dev);
> >> +	struct amdgpu_device *adev = drm_to_adev(drm_dev);
> >> +
> >> +	adev->pmops_state = AMDGPU_PMOPS_COMPLETE;
> >>  	/* nothing to do */
> >> +	adev->pmops_state = AMDGPU_PMOPS_NONE;
> >>  }
> >>
> >>  static int amdgpu_pmops_suspend(struct device *dev)
> >>  {
> >>  	struct drm_device *drm_dev = dev_get_drvdata(dev);
> >> +	struct amdgpu_device *adev = drm_to_adev(drm_dev);
> >> +	int r;
> >>
> >> -	return amdgpu_device_suspend(drm_dev, true);
> >> +	adev->pmops_state = AMDGPU_PMOPS_SUSPEND;
> >> +	r = amdgpu_device_suspend(drm_dev, true);
> >> +	adev->pmops_state = AMDGPU_PMOPS_NONE;
> >> +	return r;
> >>  }
> >>
> >>  static int amdgpu_pmops_resume(struct device *dev)
> >>  {
> >>  	struct drm_device *drm_dev = dev_get_drvdata(dev);
> >> +	struct amdgpu_device *adev = drm_to_adev(drm_dev);
> >> +	int r;
> >>
> >> -	return amdgpu_device_resume(drm_dev, true);
> >> +	adev->pmops_state = AMDGPU_PMOPS_RESUME;
> >> +	r = amdgpu_device_resume(drm_dev, true);
> >> +	adev->pmops_state = AMDGPU_PMOPS_NONE;
> >> +	return r;
> >>  }
> >>
> >>  static int amdgpu_pmops_freeze(struct device *dev)
> >> @@ -1333,9 +1353,9 @@ static int amdgpu_pmops_freeze(struct device *dev)
> >>  	struct amdgpu_device *adev = drm_to_adev(drm_dev);
> >>  	int r;
> >>
> >> -	adev->in_hibernate = true;
> >> +	adev->pmops_state = AMDGPU_PMOPS_FREEZE;
> >>  	r = amdgpu_device_suspend(drm_dev, true);
> >> -	adev->in_hibernate = false;
> >> +	adev->pmops_state = AMDGPU_PMOPS_NONE;
> >>  	if (r)
> >>  		return r;
> >>  	return amdgpu_asic_reset(adev);
> >> @@ -1344,8 +1364,13 @@ static int amdgpu_pmops_freeze(struct device *dev)
> >>  static int amdgpu_pmops_thaw(struct device *dev)
> >>  {
> >>  	struct drm_device *drm_dev = dev_get_drvdata(dev);
> >> +	struct amdgpu_device *adev = drm_to_adev(drm_dev);
> >> +	int r;
> >>
> >> -	return amdgpu_device_resume(drm_dev, true);
> >> +	adev->pmops_state = AMDGPU_PMOPS_THAW;
> >> +	r = amdgpu_device_resume(drm_dev, true);
> >> +	adev->pmops_state = AMDGPU_PMOPS_NONE;
> >> +	return r;
> >>  }
> >>
> >>  static int amdgpu_pmops_poweroff(struct device *dev)
> >> @@ -1354,17 +1379,24 @@ static int amdgpu_pmops_poweroff(struct device *dev)
> >>  	struct amdgpu_device *adev = drm_to_adev(drm_dev);
> >>  	int r;
> >>
> >> +	adev->pmops_state = AMDGPU_PMOPS_POWEROFF;
> >>  	adev->in_poweroff_reboot_com = true;
> >>  	r = amdgpu_device_suspend(drm_dev, true);
> >>  	adev->in_poweroff_reboot_com = false;
> >> +	adev->pmops_state = AMDGPU_PMOPS_NONE;
> >>  	return r;
> >>  }
> >>
> >>  static int amdgpu_pmops_restore(struct device *dev)
> >>  {
> >>  	struct drm_device *drm_dev = dev_get_drvdata(dev);
> >> +	struct amdgpu_device *adev = drm_to_adev(drm_dev);
> >> +	int r;
> >>
> >> -	return amdgpu_device_resume(drm_dev, true);
> >> +	adev->pmops_state = AMDGPU_PMOPS_RESTORE;
> >> +	r = amdgpu_device_resume(drm_dev, true);
> >> +	adev->pmops_state = AMDGPU_PMOPS_NONE;
> >> +	return r;
> >>  }
> >>
> >>  static int amdgpu_pmops_runtime_suspend(struct device *dev)
> >> @@ -1389,6 +1421,7 @@ static int amdgpu_pmops_runtime_suspend(struct device *dev)
> >>  		}
> >>  	}
> >>
> >> +	adev->pmops_state = AMDGPU_PMOPS_RUNTIME_SUSPEND;
> >>  	adev->in_runpm = true;
> >>  	if (amdgpu_device_supports_px(drm_dev))
> >>  		drm_dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;
> >> @@ -1396,6 +1429,7 @@ static int amdgpu_pmops_runtime_suspend(struct device *dev)
> >>  	ret = amdgpu_device_suspend(drm_dev, false);
> >>  	if (ret) {
> >>  		adev->in_runpm = false;
> >> +		adev->pmops_state = AMDGPU_PMOPS_NONE;
> >>  		return ret;
> >>  	}
> >>
> >> @@ -1412,6 +1446,8 @@ static int amdgpu_pmops_runtime_suspend(struct device *dev)
> >>  		amdgpu_device_baco_enter(drm_dev);
> >>  	}
> >>
> >> +	adev->pmops_state = AMDGPU_PMOPS_NONE;
> >> +
> >>  	return 0;
> >>  }
> >>
> >> @@ -1425,6 +1461,7 @@ static int amdgpu_pmops_runtime_resume(struct device *dev)
> >>  	if (!adev->runpm)
> >>  		return -EINVAL;
> >>
> >> +	adev->pmops_state = AMDGPU_PMOPS_RUNTIME_RESUME;
> >>  	if (amdgpu_device_supports_px(drm_dev)) {
> >>  		drm_dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;
> >>
> >> @@ -1449,6 +1486,7 @@ static int amdgpu_pmops_runtime_resume(struct device *dev)
> >>  	if (amdgpu_device_supports_px(drm_dev))
> >>  		drm_dev->switch_power_state = DRM_SWITCH_POWER_ON;
> >>  	adev->in_runpm = false;
> >> +	adev->pmops_state = AMDGPU_PMOPS_NONE;
> >>  	return 0;
> >>  }
> >>
> >> @@ -1464,6 +1502,7 @@ static int amdgpu_pmops_runtime_idle(struct device *dev)
> >>  		return -EBUSY;
> >>  	}
> >>
> >> +	adev->pmops_state = AMDGPU_PMOPS_RUNTIME_IDLE;
> >>  	if (amdgpu_device_has_dc_support(adev)) {
> >>  		struct drm_crtc *crtc;
> >>
> >> @@ -1504,6 +1543,7 @@ static int amdgpu_pmops_runtime_idle(struct device *dev)
> >>
> >>  	pm_runtime_mark_last_busy(dev);
> >>  	pm_runtime_autosuspend(dev);
> >> +	adev->pmops_state = AMDGPU_PMOPS_NONE;
> >>  	return ret;
> >>  }
> >>
> >> diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> >> index 502e1b926a06..05a15f858a06 100644
> >> --- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> >> +++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> >> @@ -1327,7 +1327,8 @@ static int smu_disable_dpms(struct smu_context *smu)
> >>  	bool use_baco = !smu->is_apu &&
> >>  		((amdgpu_in_reset(adev) &&
> >>  		  (amdgpu_asic_reset_method(adev) == AMD_RESET_METHOD_BACO)) ||
> >> -		 ((adev->in_runpm || adev->in_hibernate) && amdgpu_asic_supports_baco(adev)));
> >> +		 ((adev->in_runpm || (adev->pmops_state == AMDGPU_PMOPS_FREEZE))
> >> +		  && amdgpu_asic_supports_baco(adev)));
> >>
> >>  	/*
> >>  	 * For custom pptable uploading, skip the DPM features
> >> --
> >> 2.29.2
> >>
_______________________________________________
amd-gfx mailing list
amd-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/amd-gfx