>>> If we are trying to debug a reproducible hang, it's probably best to just disable gfxoff before
messing with it, to remove that as a factor.
Agreed.
>> Otherwise, the method included in this patch is the proper way to disable/enable GFXOFF dynamically.
That doesn't sound doable, because we cannot disable GFXOFF every time we use debugfs (and restore GFXOFF again after each debugfs access is done …)
Thanks
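For reference, the disable/enable-around-access pattern from the patch below can be modeled in plain C. This is only a user-space sketch: `fake_adev`, the refcount, and the register array are assumptions for illustration; in the driver the actual calls are `amdgpu_gfx_off_ctrl(adev, false)` before the MMIO loop and `amdgpu_gfx_off_ctrl(adev, true)` after it.

```c
/* User-space model of the pattern in the patch: disable GFXOFF once
 * before a run of MMIO register accesses, re-enable it once after.
 * The struct, refcount, and fake register file are illustrative
 * assumptions, not driver code. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct fake_adev {
    int gfx_off_disable_count;  /* >0 means GFXOFF is kept disabled */
    uint32_t regs[16];          /* stand-in for the MMIO aperture */
};

/* Model of amdgpu_gfx_off_ctrl(): refcounted disable/enable. */
static void gfx_off_ctrl(struct fake_adev *adev, int enable)
{
    if (enable)
        adev->gfx_off_disable_count--;
    else
        adev->gfx_off_disable_count++;
}

/* Model of the debugfs read loop: GFXOFF stays off for the whole run
 * of register reads instead of toggling per register. */
static uint32_t debugfs_read_regs(struct fake_adev *adev,
                                  const size_t *offsets, size_t n)
{
    uint32_t sum = 0;

    gfx_off_ctrl(adev, 0);          /* disable GFXOFF once */
    for (size_t i = 0; i < n; i++) {
        assert(adev->gfx_off_disable_count > 0); /* GFX is awake here */
        sum += adev->regs[offsets[i]];
    }
    gfx_off_ctrl(adev, 1);          /* re-enable GFXOFF */

    return sum;
}
```

Because the control is refcounted, nesting is safe: an outer caller that has already disabled GFXOFF is not undone by the inner enable.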
From: Deucher, Alexander <Alexander.Deucher@xxxxxxx>
Sent: February 21, 2020 23:40
To: Christian König <ckoenig.leichtzumerken@xxxxxxxxx>; Huang, Ray <Ray.Huang@xxxxxxx>; Liu, Monk <Monk.Liu@xxxxxxx>
Cc: StDenis, Tom <Tom.StDenis@xxxxxxx>; Alex Deucher <alexdeucher@xxxxxxxxx>; amd-gfx list <amd-gfx@xxxxxxxxxxxxxxxxxxxxx>
Subject: Re: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO
[AMD Public Use]
If we are trying to debug a reproducible hang, it's probably best to just disable gfxoff before messing with it, to remove that as a factor. Otherwise, the method included
in this patch is the proper way to disable/enable GFXOFF dynamically.
Am 21.02.20 um 16:23 schrieb Huang Rui:
> On Fri, Feb 21, 2020 at 11:18:07PM +0800, Liu, Monk wrote:
>> Better not to use KIQ: when you use debugfs to read registers you have usually hit a hang, and in that case KIQ has probably already died.
> If CP is busy, the gfx should be in the "on" state at that time, so we needn't use KIQ.
Yeah, but how do you detect that? Do we have a way to wake up the CP
without asking power management to do so?
Because the register debug interface is meant to be used when the ASIC is
completely locked up. Sending messages to the SMU is not really going to
work in that situation.
Regards,
Christian.
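The concern above can be pictured as control flow: any KIQ-based read needs a timeout, and the locked-up case needs a direct-MMIO fallback. The sketch below is purely hypothetical; none of these helpers are real amdgpu functions, and it only models the trade-off under discussion, not the driver.

```c
/* Hypothetical model: try a KIQ round trip first, fall back to a
 * direct MMIO read when the ASIC/KIQ is hung.  All names here are
 * illustrative stubs, not amdgpu code. */
#include <assert.h>
#include <stdint.h>

struct fake_gpu {
    int kiq_alive;      /* 0 when the ASIC/KIQ is locked up */
    uint32_t reg_value; /* value backing the register */
};

/* Pretend KIQ round trip: fails (times out) when KIQ is dead. */
static int kiq_read(struct fake_gpu *gpu, uint32_t *out)
{
    if (!gpu->kiq_alive)
        return -1;      /* would be a timeout error in the kernel */
    *out = gpu->reg_value;
    return 0;
}

/* Direct MMIO read: always completes, but may race with GFXOFF. */
static uint32_t mmio_read(struct fake_gpu *gpu)
{
    return gpu->reg_value;
}

/* Debug path: prefer the managed read, fall back when the GPU hangs. */
static uint32_t debug_reg_read(struct fake_gpu *gpu)
{
    uint32_t v;

    if (kiq_read(gpu, &v) == 0)
        return v;
    return mmio_read(gpu);
}
```

The point of the model is the second branch: a register debug interface that only has the KIQ path stops working exactly when it is needed most.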
>
> Thanks,
> Ray
>
>> -----Original Message-----
>> From: amd-gfx <amd-gfx-bounces@xxxxxxxxxxxxxxxxxxxxx> On Behalf Of Huang Rui
>> Sent: February 21, 2020 22:34
>> To: StDenis, Tom <Tom.StDenis@xxxxxxx>
>> Cc: Alex Deucher <alexdeucher@xxxxxxxxx>; amd-gfx list <amd-gfx@xxxxxxxxxxxxxxxxxxxxx>
>> Subject: Re: [PATCH] drm/amd/amdgpu: disable GFXOFF around debugfs access to MMIO
>>
>> On Wed, Feb 19, 2020 at 10:09:46AM -0500, Tom St Denis wrote:
>>> I got some messages after a while:
>>>
>>> [ 741.788564] Failed to send Message 8.
>>> [ 746.671509] Failed to send Message 8.
>>> [ 748.749673] Failed to send Message 2b.
>>> [ 759.245414] Failed to send Message 7.
>>> [ 763.216902] Failed to send Message 2a.
>>>
>>> Are there any additional locks that should be held? Because some
>>> commands like --top or --waves can do a lot of distinct read
>>> operations (causing a lot of enable/disable calls).
>>>
>>> I'm going to sit on this a bit since I don't think the patch is ready
>>> for pushing out.
>>>
>> How about using RREG32_KIQ and WREG32_KIQ?
>>
>> Thanks,
>> Ray
>>
>>> Tom
>>>
>>> On 2020-02-19 10:07 a.m., Alex Deucher wrote:
>>>> On Wed, Feb 19, 2020 at 10:04 AM Tom St Denis <tom.stdenis@xxxxxxx> wrote:
>>>>> Signed-off-by: Tom St Denis <tom.stdenis@xxxxxxx>
>>>> Please add a patch description. With that fixed:
>>>> Reviewed-by: Alex Deucher <alexander.deucher@xxxxxxx>
>>>>
>>>>> ---
>>>>> drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 3 +++
>>>>> 1 file changed, 3 insertions(+)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
>>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
>>>>> index 7379910790c9..66f763300c96 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
>>>>> @@ -169,6 +169,7 @@ static int amdgpu_debugfs_process_reg_op(bool read, struct file *f,
>>>>> if (pm_pg_lock)
>>>>> mutex_lock(&adev->pm.mutex);
>>>>>
>>>>> + amdgpu_gfx_off_ctrl(adev, false);
>>>>> while (size) {
>>>>> uint32_t value;
>>>>>
>>>>> @@ -192,6 +193,8 @@ static int amdgpu_debugfs_process_reg_op(bool read, struct file *f,
>>>>> }
>>>>>
>>>>> end:
>>>>> + amdgpu_gfx_off_ctrl(adev, true);
>>>>> +
>>>>> if (use_bank) {
>>>>> amdgpu_gfx_select_se_sh(adev, 0xffffffff, 0xffffffff, 0xffffffff);
>>>>> mutex_unlock(&adev->grbm_idx_mutex);
>>>>> --
>>>>> 2.24.1
>>>>>
>>>>> _______________________________________________
>>>>> amd-gfx mailing list
>>>>> amd-gfx@xxxxxxxxxxxxxxxxxxxxx
>>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx