On Tue, Oct 18, 2022 at 11:11 AM Christian König <christian.koenig@xxxxxxx> wrote:
>
> Gentle ping to others to get this reviewed.
>
> Alex, this is fixing the TLB flush errors and I think we need to get it
> into -fixes ASAP.
>
> Christian.
>
> Am 14.10.22 um 10:15 schrieb Christian König:
> > Setting this flag on a scheduler fence prevents pipelining of jobs
> > depending on this fence. In other words we always insert a full CPU
> > round trip before dependen jobs are pushed to the pipeline.

typo: dependen -> dependent

> >
> > Signed-off-by: Christian König <christian.koenig@xxxxxxx>
> > CC: stable@xxxxxxxxxxxxxxx # 5.19+

Please add a link to the bug as well for both patches.  With those
fixed, series is:
Reviewed-by: Alex Deucher <alexander.deucher@xxxxxxx>

> > ---
> >   drivers/gpu/drm/scheduler/sched_entity.c | 3 ++-
> >   include/drm/gpu_scheduler.h              | 9 +++++++++
> >   2 files changed, 11 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
> > index 191c56064f19..43d337d8b153 100644
> > --- a/drivers/gpu/drm/scheduler/sched_entity.c
> > +++ b/drivers/gpu/drm/scheduler/sched_entity.c
> > @@ -385,7 +385,8 @@ static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity)
> >       }
> >
> >       s_fence = to_drm_sched_fence(fence);
> > -     if (s_fence && s_fence->sched == sched) {
> > +     if (s_fence && s_fence->sched == sched &&
> > +         !test_bit(DRM_SCHED_FENCE_DONT_PIPELINE, &fence->flags)) {
> >
> >               /*
> >                * Fence is from the same scheduler, only need to wait for
> > diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> > index 0fca8f38bee4..f01d14b231ed 100644
> > --- a/include/drm/gpu_scheduler.h
> > +++ b/include/drm/gpu_scheduler.h
> > @@ -32,6 +32,15 @@
> >
> >   #define MAX_WAIT_SCHED_ENTITY_Q_EMPTY msecs_to_jiffies(1000)
> >
> > +/**
> > + * DRM_SCHED_FENCE_DONT_PIPELINE - Prefent dependency pipelining
> > + *
> > + * Setting this flag on a scheduler fence prevents pipelining of jobs depending
> > + * on this fence. In other words we always insert a full CPU round trip before
> > + * dependen jobs are pushed to the hw queue.
> > + */
> > +#define DRM_SCHED_FENCE_DONT_PIPELINE DMA_FENCE_FLAG_USER_BITS
> > +
> >   struct drm_gem_object;
> >
> >   struct drm_gpu_scheduler;
>
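
For anyone reading along: opting a fence out of pipelining is just a matter
of setting this bit on the dma_fence before dependent jobs are pushed.
A minimal sketch, with purely illustrative names (not taken from this
series):

  #include <drm/gpu_scheduler.h>
  #include <linux/dma-fence.h>

  /*
   * Illustrative only: mark a scheduler fence so that jobs depending on
   * it are not pipelined on the same ring. The scheduler then registers a
   * callback and only pushes dependent jobs to the hw queue after the
   * fence has actually signaled, i.e. after a full CPU round trip.
   */
  static void example_block_pipelining(struct dma_fence *fence)
  {
          set_bit(DRM_SCHED_FENCE_DONT_PIPELINE, &fence->flags);
  }

The cost is extra submission latency, so a driver should only set this on
fences where the dependent job really must not start early, like the TLB
flush case this series is fixing.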