RE: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

[AMD Official Use Only - Internal Distribution Only]


Yes, it's the SPM VMID. The VM fault is triggered with VMID 0 if the KMD doesn't update SPM_VMID for the current job, since the default value of SPM_VMID is 0.

 

Thanks

Jacob

 

From: Koenig, Christian
Sent: Tuesday, February 25, 2020 6:56 PM
To: He, Jacob; Zhou, David(ChunMing); Deucher, Alexander; Christian König; amd-gfx@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

 

Hi Jacob,

Well, first of all, we can't rely on the correct behavior of Vulkan. In the kernel you always have to assume that userspace is not doing the right thing.


If there are other UMD applications, such as OCL, and the OCL driver doesn't update SPM_VMID before enabling SPM, there is a VM fault anyway, even if the Vulkan driver restores SPM_VMID to 0.


The question is with which VMID this VM fault is triggered. The SPM VMID?

If yes, then it would be quite problematic if an application could trigger a VM fault for VMID 0.

Regards,
Christian.


On 25.02.20 at 08:20, He, Jacob wrote:

[AMD Official Use Only - Internal Distribution Only]

 

The Vulkan UMD makes sure that SPM_VMID is updated when the application enables SPM, so there is no such problem for Vulkan applications. Enabling SPM and updating SPM_VMID is effectively an "atomic" operation; the Vulkan application is not even aware of SPM_VMID.

If there are other UMD applications, such as OCL, and the OCL driver doesn't update SPM_VMID before enabling SPM, there is a VM fault anyway, even if the Vulkan driver restores SPM_VMID to 0.

 

Thanks

Jacob

 

From: Koenig, Christian
Sent: Tuesday, February 25, 2020 2:51 AM
To: He, Jacob; Zhou, David(ChunMing); Deucher, Alexander; Christian König; amd-gfx@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

 

Leaving the old spm_vmid there is not a problem, because a new SPM-enabled job will update it before it runs, and SPM-disabled jobs will not access the spm_vmid register.

Do you have a high-level description of how SPM works? For example, would it be possible to cause a problem in another application if we leave the SPM VMID here and the VMID is reused?

Keep in mind that we need to guarantee process isolation, e.g. we can't allow anything bad to happen to another application if somebody is stupid enough to turn on SPM tracing without setting it up properly.

Regards,
Christian.

On 24.02.20 at 09:06, He, Jacob wrote:

Ok, I’m glad that there is no uapi change needed.

I checked the git log, and reserve_vmid was added for the shader debugger, not GPA. I'm fine with re-using it, since SPM will not be enabled for shader debugging in general. I'll try to change my patch.

BTW, using "ring->funcs->setup_spm(ring, NULL)" to clear the VMID is not going to work, since the SPM-enabled job has not finished executing at that point. Leaving the old spm_vmid there is not a problem, because a new SPM-enabled job will update it before it runs, and SPM-disabled jobs will not access the spm_vmid register.

 

Thanks

Jacob

 

From: Zhou, David(ChunMing)
Sent: Saturday, February 22, 2020 12:45 AM
To: Koenig, Christian; Deucher, Alexander; Christian König; He, Jacob; amd-gfx@xxxxxxxxxxxxxxxxxxxxx
Subject: RE: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

 

[AMD Official Use Only - Internal Distribution Only]

 

That’s fine to me.

 

-David

 

From: Koenig, Christian <Christian.Koenig@xxxxxxx>
Sent: Friday, February 21, 2020 11:33 PM
To: Deucher, Alexander <Alexander.Deucher@xxxxxxx>; Christian König <ckoenig.leichtzumerken@xxxxxxxxx>; Zhou, David(ChunMing) <David1.Zhou@xxxxxxx>; He, Jacob <Jacob.He@xxxxxxx>; amd-gfx@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

 

I would just do this as part of the vm_flush() callback on the ring.

E.g., check if the VMID you want to flush is reserved, and if yes, enable SPM.

Maybe pass along a flag or something in the job to make things easier.
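A minimal sketch of how that could look (illustrative only; the placement inside amdgpu_vm_flush() and reusing the job flag / RLC callback from this patch are assumptions, not a worked-out implementation):

/* Sketch: program the SPM VMID from the vm_flush path, so it is tied to
 * the VMID actually assigned to the job.  job->need_update_spm_vmid and
 * the update_spm_vmid RLC callback are the ones proposed in this patch;
 * alternatively the check could test whether the job's VMID is reserved.
 */
int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job,
		    bool need_pipe_sync)
{
	struct amdgpu_device *adev = ring->adev;

	/* ... existing VMID and page table flush logic ... */

	if (job->need_update_spm_vmid && adev->gfx.rlc.funcs->update_spm_vmid)
		adev->gfx.rlc.funcs->update_spm_vmid(adev, job->vmid);

	/* ... */
	return 0;
}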

Christian.

On 21.02.20 at 16:31, Deucher, Alexander wrote:

[AMD Public Use]

 

We already have the RESERVE_VMID ioctl interface; can't we just use that internally in the kernel to update the RLC register via the ring when we schedule the relevant IB? E.g., add a new ring callback to set the SPM state, set it to the reserved VMID before we schedule the IB, and then reset it to 0 after the IB in amdgpu_ib_schedule().

 

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
index 4b2342d11520..e0db9362c6ee 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
@@ -185,6 +185,9 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
        if (ring->funcs->insert_start)
                ring->funcs->insert_start(ring);

+       if (ring->funcs->setup_spm)
+               ring->funcs->setup_spm(ring, job);
+
        if (job) {
                r = amdgpu_vm_flush(ring, job, need_pipe_sync);
                if (r) {
@@ -273,6 +276,9 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
                return r;
        }

+       if (ring->funcs->setup_spm)
+               ring->funcs->setup_spm(ring, NULL);
+
        if (ring->funcs->insert_end)
                ring->funcs->insert_end(ring);

 

 

 

Alex

From: amd-gfx <amd-gfx-bounces@xxxxxxxxxxxxxxxxxxxxx> on behalf of Christian König <ckoenig.leichtzumerken@xxxxxxxxx>
Sent: Friday, February 21, 2020 5:28 AM
To: Zhou, David(ChunMing) <David1.Zhou@xxxxxxx>; He, Jacob <Jacob.He@xxxxxxx>; Koenig, Christian <Christian.Koenig@xxxxxxx>; amd-gfx@xxxxxxxxxxxxxxxxxxxxx <amd-gfx@xxxxxxxxxxxxxxxxxxxxx>
Subject: Re: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

 

That would probably be a no-go, but we could enhance the kernel driver to update the RLC_SPM_VMID register with the reserved VMID.

Handling that in userspace most likely won't work anyway, since the RLC registers are usually not accessible from userspace.

Regards,
Christian.

On 20.02.20 at 16:15, Zhou, David(ChunMing) wrote:

[AMD Official Use Only - Internal Distribution Only]

 

You can enhance amdgpu_vm_ioctl in amdgpu_vm.c to return the VMID to userspace.

 

-David

 

 

From: He, Jacob <Jacob.He@xxxxxxx>
Sent: Thursday, February 20, 2020 10:46 PM
To: Zhou, David(ChunMing) <David1.Zhou@xxxxxxx>; Koenig, Christian <Christian.Koenig@xxxxxxx>; amd-gfx@xxxxxxxxxxxxxxxxxxxxx
Subject: RE: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

 

amdgpu_vm_reserve_vmid doesn't return the reserved VMID back to user space, so there is no way for the user mode driver to update RLC_SPM_VMID.

 

Thanks

Jacob

 

From: He, Jacob
Sent: Thursday, February 20, 2020 6:20 PM
To: Zhou, David(ChunMing); Koenig, Christian; amd-gfx@xxxxxxxxxxxxxxxxxxxxx
Subject: RE: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

 

Looks like amdgpu_vm_reserve_vmid could work; let me try updating RLC_SPM_VMID with PM4 packets in the UMD.

 

Thanks

Jacob

 

From: Zhou, David(ChunMing)
Sent: Thursday, February 20, 2020 10:13 AM
To: Koenig, Christian; He, Jacob; amd-gfx@xxxxxxxxxxxxxxxxxxxxx
Subject: RE: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

 

[AMD Official Use Only - Internal Distribution Only]

Christian is right here; simply using the VMID in the kernel will cause many problems.
We already have a pair of interfaces for RGP, amdgpu_vm_reserve_vmid / amdgpu_vm_unreserve_vmid. I think you can use them instead of involving an additional kernel change.
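For reference, a rough userspace sketch of that pair as exposed through the existing DRM_AMDGPU_VM ioctl (the helper name is made up; note the ioctl only pins a VMID for this file descriptor, it does not report back which VMID was reserved):

/* Sketch: reserve (or unreserve) a VMID for the calling process via the
 * DRM_AMDGPU_VM ioctl, which is what amdgpu_vm_reserve_vmid() /
 * amdgpu_vm_unreserve_vmid() implement on the kernel side.
 */
#include <xf86drm.h>
#include <amdgpu_drm.h>

static int spm_reserve_vmid(int fd, int reserve)
{
	union drm_amdgpu_vm args = {0};

	args.in.op = reserve ? AMDGPU_VM_OP_RESERVE_VMID
			     : AMDGPU_VM_OP_UNRESERVE_VMID;

	return drmCommandWriteRead(fd, DRM_AMDGPU_VM, &args, sizeof(args));
}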

-David

-----Original Message-----
From: amd-gfx <amd-gfx-bounces@xxxxxxxxxxxxxxxxxxxxx> On Behalf Of Christian König
Sent: Wednesday, February 19, 2020 7:03 PM
To: He, Jacob <Jacob.He@xxxxxxx>; amd-gfx@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: [PATCH] drm/amdgpu: Add a chunk ID for spm trace

On 19.02.20 at 11:15, Jacob He wrote:
> [WHY]
> When SPM trace is enabled, SPM_VMID should be updated with the current
> vmid.
>
> [HOW]
> Add a chunk id, AMDGPU_CHUNK_ID_SPM_TRACE, so that UMD can tell us
> which job should update SPM_VMID.
> Right before a job is submitted to the GPU, set the SPM_VMID accordingly.
>
> [Limitation]
> Running more than one SPM-trace-enabled process simultaneously is
> not supported.

Well there are multiple problems with that patch.

First of all you need to better describe what SPM tracing is in the commit message.

Then the updating of mmRLC_SPM_MC_CNTL must be executed asynchronously on the ring. Otherwise we might corrupt an already executing SPM trace.

And you also need to make sure to disable the tracing again, otherwise we run into a bunch of trouble when the VMID is reused.

You also need to make sure that IBs using the SPM trace are serialized with each other, e.g. hack into the amdgpu_ids.c file and make sure that only one VMID at a time can have that attribute.
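To illustrate the asynchronous part, a rough sketch (hypothetical helper, not part of this patch, and it side-steps the read-modify-write of the other RLC_SPM_MC_CNTL bits): the VMID update could be emitted on the ring via the emit_wreg callback, so it executes in order with the job instead of being written immediately through MMIO:

/* Sketch: queue the SPM VMID update on the ring so the CP applies it in
 * submission order.  Caveat: emit_wreg writes the whole register, so the
 * non-VMID bits of RLC_SPM_MC_CNTL would need to be cached or known.
 */
static void gfx_v9_0_emit_spm_vmid(struct amdgpu_ring *ring, unsigned vmid)
{
	u32 reg = SOC15_REG_OFFSET(GC, 0, mmRLC_SPM_MC_CNTL);
	u32 data = (vmid & RLC_SPM_MC_CNTL__RLC_SPM_VMID_MASK) <<
		   RLC_SPM_MC_CNTL__RLC_SPM_VMID__SHIFT;

	amdgpu_ring_emit_wreg(ring, reg, data);
}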

Regards,
Christian.

>
> Change-Id: Ic932ef6ac9dbf244f03aaee90550e8ff3a675666
> Signed-off-by: Jacob He <jacob.he@xxxxxxx>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c  |  7 +++++++
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c  | 10 +++++++---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.h |  1 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_rlc.h |  1 +
>   drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c  | 15 ++++++++++++++-
>   drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c   |  3 ++-
>   drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c   |  3 ++-
>   drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c   | 15 ++++++++++++++-
>   8 files changed, 48 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index f9fa6e104fef..3f32c4db5232 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -113,6 +113,7 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
>        uint32_t uf_offset = 0;
>        int i;
>        int ret;
> +     bool update_spm_vmid = false;
>  
>        if (cs->in.num_chunks == 0)
>                return 0;
> @@ -221,6 +222,10 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
>                case AMDGPU_CHUNK_ID_SYNCOBJ_TIMELINE_SIGNAL:
>                        break;
>  
> +             case AMDGPU_CHUNK_ID_SPM_TRACE:
> +                     update_spm_vmid = true;
> +                     break;
> +
>                default:
>                        ret = -EINVAL;
>                        goto free_partial_kdata;
> @@ -231,6 +236,8 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
>        if (ret)
>                goto free_all_kdata;
>  
> +     p->job->need_update_spm_vmid = update_spm_vmid;
> +
>        if (p->ctx->vram_lost_counter != p->job->vram_lost_counter) {
>                ret = -ECANCELED;
>                goto free_all_kdata;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> index cae81914c821..36faab12b585 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> @@ -156,9 +156,13 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
>                return -EINVAL;
>        }
>  
> -     if (vm && !job->vmid) {
> -             dev_err(adev->dev, "VM IB without ID\n");
> -             return -EINVAL;
> +     if (vm) {
> +             if (!job->vmid) {
> +                     dev_err(adev->dev, "VM IB without ID\n");
> +                     return -EINVAL;
> +             } else if (adev->gfx.rlc.funcs->update_spm_vmid && job->need_update_spm_vmid) {
> +                     adev->gfx.rlc.funcs->update_spm_vmid(adev, job->vmid);
> +             }
>        }
>  
>        alloc_size = ring->funcs->emit_frame_size + num_ibs *
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> index 2e2110dddb76..4582536961c7 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> @@ -52,6 +52,7 @@ struct amdgpu_job {
>        bool                    vm_needs_flush;
>        uint64_t                vm_pd_addr;
>        unsigned                vmid;
> +     bool                    need_update_spm_vmid;
>        unsigned                pasid;
>        uint32_t                gds_base, gds_size;
>        uint32_t                gws_base, gws_size;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_rlc.h
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_rlc.h
> index d3d4707f2168..52509c254cbd 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_rlc.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_rlc.h
> @@ -126,6 +126,7 @@ struct amdgpu_rlc_funcs {
>        void (*stop)(struct amdgpu_device *adev);
>        void (*reset)(struct amdgpu_device *adev);
>        void (*start)(struct amdgpu_device *adev);
> +     void (*update_spm_vmid)(struct amdgpu_device *adev, unsigned vmid);
>   };
>  
>   struct amdgpu_rlc {
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> index 5e9fb0976c6c..91eb788d6229 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> @@ -4214,6 +4214,18 @@ static int gfx_v10_0_update_gfx_clock_gating(struct amdgpu_device *adev,
>        return 0;
>   }
>  
> +static void gfx_v10_0_update_spm_vmid(struct amdgpu_device *adev, unsigned vmid)
> +{
> +     u32 data;
> +
> +     data = RREG32_SOC15(GC, 0, mmRLC_SPM_MC_CNTL);
> +
> +     data &= ~RLC_SPM_MC_CNTL__RLC_SPM_VMID_MASK;
> +     data |= (vmid & RLC_SPM_MC_CNTL__RLC_SPM_VMID_MASK) << RLC_SPM_MC_CNTL__RLC_SPM_VMID__SHIFT;
> +
> +     WREG32_SOC15(GC, 0, mmRLC_SPM_MC_CNTL, data);
> +}
> +
>   static const struct amdgpu_rlc_funcs gfx_v10_0_rlc_funcs = {
>        .is_rlc_enabled = gfx_v10_0_is_rlc_enabled,
>        .set_safe_mode = gfx_v10_0_set_safe_mode,
> @@ -4224,7 +4236,8 @@ static const struct amdgpu_rlc_funcs gfx_v10_0_rlc_funcs = {
>        .resume = gfx_v10_0_rlc_resume,
>        .stop = gfx_v10_0_rlc_stop,
>        .reset = gfx_v10_0_rlc_reset,
> -     .start = gfx_v10_0_rlc_start
> +     .start = gfx_v10_0_rlc_start,
> +     .update_spm_vmid = gfx_v10_0_update_spm_vmid
>   };
>  
>   static int gfx_v10_0_set_powergating_state(void *handle,
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
> b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
> index 8f20a5dd44fe..b24fc55cf13a 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
> @@ -4221,7 +4221,8 @@ static const struct amdgpu_rlc_funcs gfx_v7_0_rlc_funcs = {
>        .resume = gfx_v7_0_rlc_resume,
>        .stop = gfx_v7_0_rlc_stop,
>        .reset = gfx_v7_0_rlc_reset,
> -     .start = gfx_v7_0_rlc_start
> +     .start = gfx_v7_0_rlc_start,
> +     .update_spm_vmid = NULL
>   };
>  
>   static int gfx_v7_0_early_init(void *handle)
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
> b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
> index fa245973de12..66640d2b6b37 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
> @@ -5600,7 +5600,8 @@ static const struct amdgpu_rlc_funcs iceland_rlc_funcs = {
>        .resume = gfx_v8_0_rlc_resume,
>        .stop = gfx_v8_0_rlc_stop,
>        .reset = gfx_v8_0_rlc_reset,
> -     .start = gfx_v8_0_rlc_start
> +     .start = gfx_v8_0_rlc_start,
> +     .update_spm_vmid = NULL
>   };
>  
>   static void gfx_v8_0_update_medium_grain_clock_gating(struct amdgpu_device *adev,
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> index 9b7ff783e9a5..df872f949f68 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> @@ -4704,6 +4704,18 @@ static int gfx_v9_0_update_gfx_clock_gating(struct amdgpu_device *adev,
>        return 0;
>   }
>  
> +static void gfx_v9_0_update_spm_vmid(struct amdgpu_device *adev, unsigned vmid)
> +{
> +     u32 data;
> +
> +     data = RREG32_SOC15(GC, 0, mmRLC_SPM_MC_CNTL);
> +
> +     data &= ~RLC_SPM_MC_CNTL__RLC_SPM_VMID_MASK;
> +     data |= (vmid & RLC_SPM_MC_CNTL__RLC_SPM_VMID_MASK) << RLC_SPM_MC_CNTL__RLC_SPM_VMID__SHIFT;
> +
> +     WREG32_SOC15(GC, 0, mmRLC_SPM_MC_CNTL, data);
> +}
> +
>   static const struct amdgpu_rlc_funcs gfx_v9_0_rlc_funcs = {
>        .is_rlc_enabled = gfx_v9_0_is_rlc_enabled,
>        .set_safe_mode = gfx_v9_0_set_safe_mode,
> @@ -4715,7 +4727,8 @@ static const struct amdgpu_rlc_funcs gfx_v9_0_rlc_funcs = {
>        .resume = gfx_v9_0_rlc_resume,
>        .stop = gfx_v9_0_rlc_stop,
>        .reset = gfx_v9_0_rlc_reset,
> -     .start = gfx_v9_0_rlc_start
> +     .start = gfx_v9_0_rlc_start,
> +     .update_spm_vmid = gfx_v9_0_update_spm_vmid
>   };
>  
>   static int gfx_v9_0_set_powergating_state(void *handle,

_______________________________________________
amd-gfx mailing list
amd-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
