Quoting Tvrtko Ursulin (2019-03-08 06:46:52)
>
> On 07/03/2019 22:24, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2019-03-07 17:06:58)
> >>
> >> On 07/03/2019 13:29, Chris Wilson wrote:
> >>> Quoting Tvrtko Ursulin (2019-03-07 13:07:18)
> >>>>
> >>>> On 06/03/2019 14:24, Chris Wilson wrote:
> >>>>> +static bool switch_to_kernel_context_sync(struct drm_i915_private *i915)
> >>>>> +{
> >>>>> +        if (i915_gem_switch_to_kernel_context(i915))
> >>>>> +                return false;
> >>>>
> >>>> Is it worth still trying to idle if this fails? Since the timeout is
> >>>> short, maybe a reset in the idle state brings less havoc than not. It
> >>>> can only fail on memory allocations I think, okay, and on terminally
> >>>> wedged. In which case it is still okay.
> >>>
> >>> Terminally wedged is hard wired to return 0 (this, the next patch?) so
> >>> that we don't bail during i915_gem_suspend() for this reason.
> >>>
> >>> We do still idle if this fails, as we mark the driver/GPU as wedged.
> >>> Perform a GPU reset so that it hopefully isn't shooting kittens any
> >>> more, and pull a pillow over our heads.
> >>
> >> I didn't find a path which idles before wedging if
> >> switch_to_kernel_context_sync fails due to failing
> >> i915_gem_switch_to_kernel_context. Where is it?
> >
> > Wedging implies idling. When we are wedged, the GPU is reset and left
> > pointing into the void (effectively idle; GT powersaving should be
> > unaffected by the wedge, don't ask how that works on ilk). All the
> > callers do
> >
> >         if (!switch_to_kernel_context_sync())
> >                 i915_gem_set_wedged()
> >
> > with more or less intermediate steps. Hmm, given that is true, why not
> > pull it into switch_to_kernel_context_sync()...
>
> If all callers follow up with a wedge maybe, yes.

The problem is we don't reach that point until a couple more patches.
Sigh.

> >> It is a minor concern, don't get me wrong. It is unlikely to fail like
> >> this. I was simply thinking why not try and wait for the current work
> >> to finish before suspending in this case. Might be a better experience
> >> after resume.
> >
> > For the desktop use case, it's immaterial as the hotplug, reconfigure
> > and redraw take care of that. (Fbcon is also cleared.)
>
> But whether the context state is sane or corrupt I think comes into
> play. A short wait for idle before suspend might still work if the
> extremely unlikely fail in i915_gem_switch_to_kernel_context happens.

But then their context may be corrupt because of the suspend, the state
being still in the GPU as we save the pages. Speaking of which, since we
have the default context state now, we should restore the kernel
contexts across resume.

> Seems more robust to me to try regardless since the timeout is short.

We don't trust suspend not to lose updates to the resident context. And
wedging itself isn't aware that it may be damaging a pinned but inactive
context, as reset itself is ignorant of that (we hope the requests that
have the HW save itself before reset take effect!). Accept the
compromise of

        bool success = true;

        if (switch_to_kernel_context() < 0)
                success = false;
        if (wait_for_idle(I915_GEM_IDLE_TIMEOUT) < 0)
                success = false;

        return success;

and in the callers

        if (!success)
                i915_gem_set_wedged()

So anyone who was able to save themselves before the ship sank, does.
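Spelled out, that compromise might look roughly like the sketch below.
This is only a sketch of the idea being discussed, not the eventual
patch; the i915_gem_wait_for_idle() signature, the I915_WAIT_LOCKED
flag and I915_GEM_IDLE_TIMEOUT are assumed from the i915 code of that
era rather than quoted from this series.

        /*
         * Sketch of the compromise: attempt the switch to the kernel
         * context and the idle wait regardless of an earlier failure,
         * so that any request that can save its state does so, and
         * only report the combined result. The caller still wedges on
         * failure.
         */
        static bool switch_to_kernel_context_sync(struct drm_i915_private *i915)
        {
                bool success = true;

                /* Even if the switch fails (e.g. -ENOMEM), still try to idle. */
                if (i915_gem_switch_to_kernel_context(i915))
                        success = false;

                /* Short timeout; a stuck GPU is wedged by the caller anyway. */
                if (i915_gem_wait_for_idle(i915, I915_WAIT_LOCKED,
                                           I915_GEM_IDLE_TIMEOUT))
                        success = false;

                return success;
        }

Callers would keep the pattern quoted above, i.e.
if (!switch_to_kernel_context_sync(i915)) i915_gem_set_wedged(i915).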
> >>>>>    static void
> >>>>>    i915_gem_idle_work_handler(struct work_struct *work)
> >>>>>    {
> >>>>> -          struct drm_i915_private *dev_priv =
> >>>>> -                  container_of(work, typeof(*dev_priv), gt.idle_work.work);
> >>>>> +          struct drm_i915_private *i915 =
> >>>>> +                  container_of(work, typeof(*i915), gt.idle_work.work);
> >>>>> +          typeof(i915->gt) *gt = &i915->gt;
> >>>>
> >>>> I am really not sure about the typeof idiom in normal C code. :( It
> >>>> saves a little bit of typing, and a little bit of churn if the type
> >>>> name changes, but it just feels weird to use it somewhere and
> >>>> somewhere not.
> >>>
> >>> But then we have to name things! We're not sold on gt; it means quite
> >>> a few different things around the bspec. This bit is actually a part
> >>> that I'm earmarking for i915_gem itself (high level idle/power/user
> >>> management), and I'm contemplating i915_gpu for the bits that are
> >>> beneath us but still a management layer over hardware (and intel_foo
> >>> for the bits that talk to hardware. Maybe that too will change if we
> >>> completely split out into different modules.)
> >>
> >> So you could have left it as is for now and have a smaller diff. But
> >> okay.. have it if you insist.
> >
> > No, you've stated on a few occasions that you don't like gt->X, so I'll
> > have to find a new strategy and fixup patches as I remember your
> > distaste.
>
> I know I don't like the sprinkling of typeof to declare locals, but I
> don't remember if I disliked something more. Not sure what your "no"
> refers to now. That you feel you had to do this in this patch, or that
> you don't accept my "have it if you insist"?

I think you have a reasonable objection to using typeof(i915->gt) /
auto locals and will rework the patches to not introduce them.
-Chris
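As a side note on the idiom being debated: typeof is the GNU C
construct that declares a local from the type of an existing
expression, which also works when the member is an anonymous struct
with no type name to spell out (the "we have to name things" problem
above). A minimal illustration, with hypothetical struct names invented
purely for the example:

        struct intel_gt_example { unsigned int awake; };

        struct i915_example { struct intel_gt_example gt; };

        static void idle_work_example(struct i915_example *i915)
        {
                /* Explicit spelling: the member's type needs a name, and
                 * that name is repeated (and churns on a future rename). */
                struct intel_gt_example *gt_named = &i915->gt;

                /* typeof spelling: the local picks up whatever type
                 * i915->gt has, at the cost of hiding the type name from
                 * the reader at the declaration site. */
                typeof(i915->gt) *gt = &i915->gt;

                gt->awake = gt_named->awake;
        }

The readability cost at the declaration site is the objection raised
above, which is why the typeof locals are being dropped from the
series.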