On 3/23/20 10:22 AM, Yintian Tao wrote:
> There is one corner case at dma_fence_signal_locked
> which will raise the NULL pointer problem shown below.
> ->dma_fence_signal
>     ->dma_fence_signal_locked
>         ->test_and_set_bit
> here dma_fence_release is triggered because the fence
> refcount has already dropped to zero.
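
For reference, this is roughly where that chain lands -- a sketch of the
v5.4 dma_fence_signal_locked() from memory (abridged; the real code is in
drivers/dma-buf/dma-fence.c):

    bool dma_fence_signal_locked(struct dma_fence *fence)
    {
            struct list_head cb_list;

            if (unlikely(test_and_set_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
                                          &fence->flags)))
                    return false;

            /*
             * Stash the cb_list before replacing it with the timestamp.
             * list_replace() writes through cb_list.next, so once the
             * union has been clobbered and cb_list.next is NULL this is
             * a write to offset 8 of a NULL pointer -- matching the
             * write fault at address 0000000000000008 in the oops
             * quoted below.
             */
            list_replace(&fence->cb_list, &cb_list);
            ...
    }
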
Did you find out why the zero refcount on the finished fence happens
before the fence was signaled? The finished fence is created with
refcount set to 1 in drm_sched_fence_create->dma_fence_init, and the
refcount is then decremented in
drm_sched_main->amdgpu_job_free_cb->drm_sched_job_cleanup. This should
only happen after the fence is already signaled (see
drm_sched_get_cleanup_job). On top of that, the finished fence is
referenced from other places (e.g. entity->last_scheduled etc.)...
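
Condensed into code, the lifetime I mean is the following (a sketch
assuming the v5.4-era scheduler flow; the real call sites and error
handling are elided):

    static void finished_fence_lifetime_sketch(struct drm_sched_job *job)
    {
            struct drm_sched_fence *s_fence = job->s_fence;

            /* drm_sched_fence_create() -> dma_fence_init(): refcount == 1 */

            drm_sched_fence_finished(s_fence); /* signals &s_fence->finished */

            /*
             * drm_sched_get_cleanup_job() only hands the job to
             * free_job() (amdgpu_job_free_cb) once the finished fence is
             * signaled, and only then does drm_sched_job_cleanup() drop
             * the init reference.
             */
            drm_sched_job_cleanup(job);        /* dma_fence_put(): 1 -> 0 */
    }
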
>
> ->dma_fence_put
>     ->dma_fence_release
>         ->drm_sched_fence_release_scheduled
>             ->call_rcu
> here call_rcu sets the union field "cb_list" of the finished fence
> to NULL, because struct rcu_head contains two pointers,
> the same layout as struct list_head cb_list
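
For reference, the union in question (abridged from memory of the v5.4
include/linux/dma-fence.h; the inline comments are mine):

    struct dma_fence {
            spinlock_t *lock;
            const struct dma_fence_ops *ops;
            union {
                    struct list_head cb_list; /* pending callbacks while active */
                    ktime_t timestamp;        /* replaces cb_list once signaled */
                    struct rcu_head rcu;      /* replaces timestamp on release */
            };
            u64 context;
            u64 seqno;
            unsigned long flags;
            struct kref refcount;
            int error;
    };

call_rcu(&fence->finished.rcu, ...) sets rcu.next to NULL and rcu.func
to the callback, i.e. it clobbers cb_list.next/cb_list.prev -- the "two
pointers" aliasing the list_head that the commit message refers to.
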
> Therefore, hold a reference to the finished fence in
> drm_sched_process_job to prevent the NULL pointer dereference
> during dma_fence_signal on the finished fence.
>
> [ 732.912867] BUG: kernel NULL pointer dereference, address: 0000000000000008
> [ 732.914815] #PF: supervisor write access in kernel mode
> [ 732.915731] #PF: error_code(0x0002) - not-present page
> [ 732.916621] PGD 0 P4D 0
> [ 732.917072] Oops: 0002 [#1] SMP PTI
> [ 732.917682] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G OE 5.4.0-rc7 #1
> [ 732.918980] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.2-0-g33fbe13 by qemu-project.org 04/01/2014
> [ 732.920906] RIP: 0010:dma_fence_signal_locked+0x3e/0x100
> [ 732.938569] Call Trace:
> [ 732.939003]  <IRQ>
> [ 732.939364]  dma_fence_signal+0x29/0x50
> [ 732.940036]  drm_sched_fence_finished+0x12/0x20 [gpu_sched]
> [ 732.940996]  drm_sched_process_job+0x34/0xa0 [gpu_sched]
> [ 732.941910]  dma_fence_signal_locked+0x85/0x100
> [ 732.942692]  dma_fence_signal+0x29/0x50
> [ 732.943457]  amdgpu_fence_process+0x99/0x120 [amdgpu]
> [ 732.944393]  sdma_v4_0_process_trap_irq+0x81/0xa0 [amdgpu]
>
> v2: hold the finished fence at drm_sched_process_job instead of
>     amdgpu_fence_process
> v3: restore the blank line
>
> Signed-off-by: Yintian Tao <yttao@xxxxxxx>
> ---
> drivers/gpu/drm/scheduler/sched_main.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index a18eabf692e4..8e731ed0d9d9 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -651,7 +651,9 @@ static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
>
>  	trace_drm_sched_process_job(s_fence);
>
> +	dma_fence_get(&s_fence->finished);
>  	drm_sched_fence_finished(s_fence);
If the fence can already have been released during the call to
drm_sched_fence_finished->dma_fence_signal->..., why is it safe to
reference s_fence just before that call? Can't it already be released
by this time?
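
To illustrate the concern (a hypothetical snippet, not a suggested
change): if the refcount can really reach zero before this point, an
unconditional dma_fence_get() is itself a use-after-free, and only a
conditional get such as dma_fence_get_rcu() (kref_get_unless_zero()
underneath) could take a reference safely:

    if (!dma_fence_get_rcu(&s_fence->finished))
            return; /* last reference already gone, fence being freed */
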
Andrey
> +	dma_fence_put(&s_fence->finished);
>
>  	wake_up_interruptible(&sched->wake_up_worker);
>  }
>
_______________________________________________
amd-gfx mailing list
amd-gfx@xxxxxxxxxxxxxxxxxxxxx