On 4/16/19 12:00 PM, Koenig, Christian wrote:
> Am 16.04.19 um 17:42 schrieb Grodzovsky, Andrey:
>> On 4/16/19 10:58 AM, Grodzovsky, Andrey wrote:
>>> On 4/16/19 10:43 AM, Koenig, Christian wrote:
>>>> Am 16.04.19 um 16:36 schrieb Grodzovsky, Andrey:
>>>>> On 4/16/19 5:47 AM, Christian König wrote:
>>>>>> Am 15.04.19 um 23:17 schrieb Eric Anholt:
>>>>>>> Andrey Grodzovsky <andrey.grodzovsky@xxxxxxx> writes:
>>>>>>>
>>>>>>>> From: Christian König <christian.koenig@xxxxxxx>
>>>>>>>>
>>>>>>>> We now destroy finished jobs from the worker thread to make sure that
>>>>>>>> we never destroy a job currently in timeout processing.
>>>>>>>> By this we avoid holding the lock around the ring mirror list in
>>>>>>>> drm_sched_stop, which should solve a deadlock reported by a user.
>>>>>>>>
>>>>>>>> v2: Remove unused variable.
>>>>>>>>
>>>>>>>> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109692
>>>>>>>>
>>>>>>>> Signed-off-by: Christian König <christian.koenig@xxxxxxx>
>>>>>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@xxxxxxx>
>>>>>>>> ---
>>>>>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c |  17 ++--
>>>>>>>>   drivers/gpu/drm/etnaviv/etnaviv_dump.c     |   4 -
>>>>>>>>   drivers/gpu/drm/etnaviv/etnaviv_sched.c    |   9 +-
>>>>>>>>   drivers/gpu/drm/scheduler/sched_main.c     | 138 +++++++++++++++++------------
>>>>>>>>   drivers/gpu/drm/v3d/v3d_sched.c            |   9 +-
>>>>>>> Missing corresponding panfrost and lima updates. You should probably
>>>>>>> pull in drm-misc for hacking on the scheduler.
>>>>>>>
>>>>>>>> diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
>>>>>>>> index ce7c737b..8efb091 100644
>>>>>>>> --- a/drivers/gpu/drm/v3d/v3d_sched.c
>>>>>>>> +++ b/drivers/gpu/drm/v3d/v3d_sched.c
>>>>>>>> @@ -232,11 +232,18 @@ v3d_gpu_reset_for_timeout(struct v3d_dev *v3d, struct drm_sched_job *sched_job)
>>>>>>>>
>>>>>>>>       /* block scheduler */
>>>>>>>>       for (q = 0; q < V3D_MAX_QUEUES; q++)
>>>>>>>> -        drm_sched_stop(&v3d->queue[q].sched);
>>>>>>>> +        drm_sched_stop(&v3d->queue[q].sched, sched_job);
>>>>>>>>
>>>>>>>>       if(sched_job)
>>>>>>>>           drm_sched_increase_karma(sched_job);
>>>>>>>>
>>>>>>>> +    /*
>>>>>>>> +     * Guilty job did complete and hence needs to be manually removed
>>>>>>>> +     * See drm_sched_stop doc.
>>>>>>>> +     */
>>>>>>>> +    if (list_empty(&sched_job->node))
>>>>>>>> +        sched_job->sched->ops->free_job(sched_job);
>>>>>>> If the if (sched_job) is necessary up above, then this should clearly be
>>>>>>> under it.
>>>>>>>
>>>>>>> But, can we please have a core scheduler thing we call here instead of
>>>>>>> drivers all replicating it?
>>>>>> Yeah, that's also something I noted before.
>>>>>>
>>>>>> The essential problem is that we remove finished jobs from the mirror
>>>>>> list and so need to destruct them, because we otherwise leak them.
>>>>>>
>>>>>> An alternative approach here would be to keep the jobs on the ring
>>>>>> mirror list, but not submit them again.
>>>>>>
>>>>>> Regards,
>>>>>> Christian.
>>>>> I really prefer to avoid this; it means adding an extra flag to
>>>>> sched_job to check in each iteration over the ring mirror list.
>>>> Mhm, why actually? We just need to check if the scheduler fence is signaled.
>>> OK, I see it's equivalent, but this is still an extra check for all the
>>> iterations.
>>>
>>>>> What about changing the signature of drm_sched_backend_ops.timedout_job
>>>>> to return struct drm_sched_job * instead of void? This way we can
>>>>> return the guilty job back from the driver-specific handler to the
>>>>> generic drm_sched_job_timedout and release it there.
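>>>>>
>>>>> Roughly something like this (just an illustrative sketch, not tested;
>>>>> the return-the-guilty-job-or-NULL convention is only an example):
>>>>>
>>>>>     /* driver returns the job if the generic code should free it,
>>>>>      * NULL if it must not be touched */
>>>>>     struct drm_sched_job *(*timedout_job)(struct drm_sched_job *sched_job);
>>>>>
>>>>> and in drm_sched_job_timedout:
>>>>>
>>>>>     struct drm_sched_job *guilty = job->sched->ops->timedout_job(job);
>>>>>
>>>>>     if (guilty)
>>>>>         guilty->sched->ops->free_job(guilty);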
>>>> Well, the timeout handler already has the job, so returning it doesn't
>>>> make much sense.
>>>>
>>>> The problem is rather that the timeout handler doesn't know if it should
>>>> destroy the job or not.
>>> But the driver-specific handler does, and actually returning back either
>>> the pointer to the job or NULL will give an indication of that. We could
>>> even return a bool.
>>>
>>> Andrey
>> Thinking a bit more about this - the way this check is done now, "if
>> (list_empty(&sched_job->node)) then free the sched_job", actually makes
>> it possible to just move this as-is from the driver-specific callbacks
>> into drm_sched_job_timedout without any other changes.
> Oh, well that sounds like a good idea off hand.
>
> Need to see the final code, but at least the best idea so far.
>
> Christian.

Unfortunately it looks like it's not that good an idea after all. Take a
look at the attached KASAN print - the sched thread's cleanup function races
against the TDR handler and removes the guilty job from the mirror list, so
we have no way of differentiating whether the job was removed from within
the TDR handler or from the sched thread's cleanup function. So it looks
like we either need to 'keep the jobs on the ring mirror list, but not
submit them again' as you suggested before, or add a flag to sched_job to
hint to drm_sched_job_timedout that the guilty job requires manual removal.
Your suggestion implies we will need an extra check in almost every place
that traverses the ring mirror list, to avoid handling signaled jobs, while
mine requires an extra flag in the sched_job struct. I feel that keeping
completed jobs in the mirror list when they actually don't belong there any
more is confusing and an opening for future bugs.

Andrey

>
>> Andrey
>>
>>>> Christian.
>>>>
>>>>> Andrey
>>>>>
>>>>>>>> +
>>>>>>>>       /* get the GPU back into the init state */
>>>>>>>>       v3d_reset(v3d);
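P.S. The extra-flag alternative I have in mind would look roughly like this
(untested sketch, field name made up, just to show the idea):

    struct drm_sched_job {
        ...
        /* set once the guilty job has been taken off the mirror list
         * and hence must be freed manually by the timeout handler */
        bool needs_manual_free;
    };

and then at the end of drm_sched_job_timedout something like:

    if (job->needs_manual_free)
        job->sched->ops->free_job(job);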
[  121.189757 <    0.000171>] amdgpu 0000:01:00.0: GPU reset(5) succeeded!
passed
[  121.189894 <    0.000137>] ==================================================================
[  121.189951 <    0.000057>] BUG: KASAN: use-after-free in drm_sched_job_timedout+0x7a/0xf0 [gpu_sched]

Run Summary:    Type  Total    Ran Passed Failed Inactive
              suites      8      0    n/a      0        0
               tests     39      1      1      0        0
             asserts      8      8      8      0      n/a

Elapsed time =    0.001 seconds

[  121.189956 <    0.000005>] Read of size 8 at addr ffff88840389a8b0 by task kworker/2:2/1140
[  121.189969 <    0.000013>] CPU: 2 PID: 1140 Comm: kworker/2:2 Tainted: G           OE     5.1.0-rc2-misc+ #1
[  121.189972 <    0.000003>] Hardware name: System manufacturer System Product Name/Z170-PRO, BIOS 1902 06/27/2016
[  121.189977 <    0.000005>] Workqueue: events drm_sched_job_timedout [gpu_sched]
[  121.189980 <    0.000003>] Call Trace:
[  121.189985 <    0.000005>]  dump_stack+0x9b/0xf5
[  121.189992 <    0.000007>]  print_address_description+0x70/0x290
[  121.189997 <    0.000005>]  ? drm_sched_job_timedout+0x7a/0xf0 [gpu_sched]
[  121.190002 <    0.000005>]  kasan_report+0x134/0x191
[  121.190006 <    0.000004>]  ? drm_sched_job_timedout+0x7a/0xf0 [gpu_sched]
[  121.190014 <    0.000008>]  ? drm_sched_job_timedout+0x7a/0xf0 [gpu_sched]
[  121.190019 <    0.000005>]  __asan_load8+0x54/0x90
[  121.190024 <    0.000005>]  drm_sched_job_timedout+0x7a/0xf0 [gpu_sched]
[  121.190034 <    0.000010>]  process_one_work+0x466/0xb00
[  121.190046 <    0.000012>]  ? queue_work_node+0x180/0x180
[  121.190061 <    0.000015>]  worker_thread+0x83/0x6c0
[  121.190075 <    0.000014>]  kthread+0x1a9/0x1f0
[  121.190079 <    0.000004>]  ? rescuer_thread+0x760/0x760
[  121.190081 <    0.000002>]  ? kthread_cancel_delayed_work_sync+0x20/0x20
[  121.190088 <    0.000007>]  ret_from_fork+0x3a/0x50

[  121.190105 <    0.000017>] Allocated by task 1421:
[  121.190110 <    0.000005>]  save_stack+0x46/0xd0
[  121.190112 <    0.000002>]  __kasan_kmalloc+0xab/0xe0
[  121.190115 <    0.000003>]  kasan_kmalloc+0xf/0x20
[  121.190117 <    0.000002>]  __kmalloc+0x167/0x390
[  121.190210 <    0.000093>]  amdgpu_job_alloc+0x47/0x170 [amdgpu]
[  121.190289 <    0.000079>]  amdgpu_cs_ioctl+0x9bd/0x2e70 [amdgpu]
[  121.190312 <    0.000023>]  drm_ioctl_kernel+0x17e/0x1d0 [drm]
[  121.190334 <    0.000022>]  drm_ioctl+0x5e1/0x640 [drm]
[  121.190409 <    0.000075>]  amdgpu_drm_ioctl+0x78/0xd0 [amdgpu]
[  121.190413 <    0.000004>]  do_vfs_ioctl+0x152/0xa30
[  121.190415 <    0.000002>]  ksys_ioctl+0x6d/0x80
[  121.190418 <    0.000003>]  __x64_sys_ioctl+0x43/0x50
[  121.190425 <    0.000007>]  do_syscall_64+0x7d/0x240
[  121.190430 <    0.000005>]  entry_SYSCALL_64_after_hwframe+0x49/0xbe

[  121.190440 <    0.000010>] Freed by task 1242:
[  121.190448 <    0.000008>]  save_stack+0x46/0xd0
[  121.190453 <    0.000005>]  __kasan_slab_free+0x13c/0x1a0
[  121.190458 <    0.000005>]  kasan_slab_free+0xe/0x10
[  121.190462 <    0.000004>]  kfree+0xfa/0x2e0
[  121.190584 <    0.000122>]  amdgpu_job_free_cb+0x7f/0x90 [amdgpu]
[  121.190589 <    0.000005>]  drm_sched_cleanup_jobs.part.10+0xcf/0x1a0 [gpu_sched]
[  121.190594 <    0.000005>]  drm_sched_main+0x38a/0x430 [gpu_sched]
[  121.190596 <    0.000002>]  kthread+0x1a9/0x1f0
[  121.190599 <    0.000003>]  ret_from_fork+0x3a/0x50