Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> writes:

> Avoid blocking the kworker by putting back the freed object list if we
> cannot immediately take the mutex. We will try again shortly, and flush
> the work when desperate.
>
> References: https://bugs.freedesktop.org/show_bug.cgi?id=100434
> Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> ---
>  drivers/gpu/drm/i915/i915_gem.c | 14 +++++++++++++-
>  1 file changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index ab77e38ec264..c2e5cb529b0f 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -4200,7 +4200,19 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915,
>  {
>  	struct drm_i915_gem_object *obj, *on;
>
> -	mutex_lock(&i915->drm.struct_mutex);
> +	if (!mutex_trylock(&i915->drm.struct_mutex)) {
> +		/* If we fail to acquire the struct_mutex, put back the
> +		 * freed list and we will try again in the future. By
> +		 * rescheduling the task we prevent us from blocking
> +		 * the worker indefinitely on a prolonged wait for
> +		 * struct_mutex.

I don't understand the last part of the comment. If we don't want a
prolonged block due to the mutex, should we limit the amount of work we
do here, inside the mutex, by limiting how much we free per acquisition
of the lock? Something along the lines of the rough sketch at the end
of this mail.

-Mika

> +		 */
> +		if (llist_add_batch(llist_reverse_order(freed), freed,
> +				    &i915->mm.free_list))
> +			schedule_work(&i915->mm.free_work);
> +		return;
> +	}
> +
>  	intel_runtime_pm_get(i915);
>  	llist_for_each_entry(obj, freed, freed) {
>  		struct i915_vma *vma, *vn;
> --
> 2.11.0
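For illustration, a rough, untested sketch of what I mean by bounding
the work per struct_mutex acquisition. FREE_BATCH and free_one_object()
are made-up names here, not anything in i915 today (the per-object
teardown would be your existing vma loop); whatever is left over goes
back onto mm.free_list and the worker is rescheduled, much like the
trylock fallback in your patch:

/*
 * Hypothetical sketch only: free at most FREE_BATCH objects per
 * struct_mutex acquisition, then hand the remainder back to the worker.
 */
#define FREE_BATCH 64

static void __i915_gem_free_objects_bounded(struct drm_i915_private *i915,
					     struct llist_node *freed)
{
	struct llist_node *pos = freed;
	unsigned int count = 0;

	mutex_lock(&i915->drm.struct_mutex);
	while (pos && count++ < FREE_BATCH) {
		struct drm_i915_gem_object *obj =
			container_of(pos, struct drm_i915_gem_object, freed);

		pos = pos->next;
		free_one_object(i915, obj); /* hypothetical per-object teardown */
	}
	mutex_unlock(&i915->drm.struct_mutex);

	/* Put the remainder back and let the worker have another go. */
	if (pos) {
		struct llist_node *last = pos;

		while (last->next)
			last = last->next;

		if (llist_add_batch(pos, last, &i915->mm.free_list))
			schedule_work(&i915->mm.free_work);
	}
}

That would keep each hold of struct_mutex short without losing the
deferred-free semantics, at the cost of walking the leftover list once
to find its tail for llist_add_batch().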