Re: [RFC PATCH] drm/vc4: Add a load tracker to prevent HVS underflow errors

On Tue, Oct 16, 2018 at 3:10 PM Boris Brezillon
<boris.brezillon@xxxxxxxxxxx> wrote:
>
> Hi Daniel,
>
> On Tue, 16 Oct 2018 14:57:43 +0200
> Daniel Vetter <daniel@xxxxxxxx> wrote:
>
> > On Tue, Oct 16, 2018 at 11:40:45AM +0200, Boris Brezillon wrote:
> > > The HVS block is supposed to fill the pixelvalve FIFOs fast enough to
> > > meet the requested framerate. The problem is, the HVS and memory bus
> > > bandwidths are limited, and if we don't take these limitations into
> > > account we might end up with HVS underflow errors.
> > >
> > > This patch is trying to model the per-plane HVS and memory bus bandwidth
> > > consumption and take a decision at atomic_check() time whether the
> > > estimated load will fit in the HVS and membus budget.
> > >
> > > Note that we take an extra margin on the memory bus consumption to let
> > > the system run smoothly when other blocks are doing heavy use of the
> > > memory bus. Same goes for the HVS limit, except the margin is smaller in
> > > this case, since the HVS is not used by external components.
> > >
> > > Signed-off-by: Boris Brezillon <boris.brezillon@xxxxxxxxxxx>
> > > ---
> > > This logic has been validated using a simple shell script and
> > > some instrumentation in the VC4 driver:
> > >
> > > - capture underflow errors at the HVS level and expose a debugfs file
> > >   reporting those errors
> > > - add debugfs files to expose when atomic_check fails because of the
> > >   HVS or membus load limitation or when it fails for other reasons
> > >
> > > The branch containing those modifications is available here [1], and the
> > > script (which internally uses modetest) is here [2] (please note
> > > that I'm bad at writing shell scripts :-)).
> > >
> > > Note that those modifications tend to over-estimate the load, and thus
> > > reject setups that might have previously worked, so we might want to
> > > adjust the limits to avoid that.
> > >
> > > [1]https://github.com/bbrezillon/linux/tree/vc4/hvs-bandwidth-eval
> > > [2]https://github.com/bbrezillon/vc4-hvs-bandwidth-test
> >
> > Any interest in using igt to test this stuff? We already have a bunch
> > of tests in there that try all kinds of plane setups, and we use those
> > to hunt for underruns on i915 hw.
> >
> > Wrt underrun reporting: On i915 we just dump them into dmesg at the error
> > level, using DRM_ERROR, plus a tracepoint. See e.g.
> > intel_pch_fifo_underrun_irq_handler(). If there's interest we could
> > perhaps extract this into something common, similar to what was done with
> > crc support already.
>
> Sounds like a good idea. I'll have a look at what's done in the intel
> driver and will check how feasible this is to have a common
> infrastructure to test that in igt.
>
> Thanks for the pointers.
>
>
> > > +static int vc4_load_tracker_atomic_check(struct drm_atomic_state *state)
> > > +{
> > > +   struct drm_plane_state *old_plane_state, *new_plane_state;
> > > +   struct vc4_dev *vc4 = to_vc4_dev(state->dev);
> > > +   struct vc4_load_tracker_state *load_state;
> > > +   struct drm_private_state *priv_state;
> > > +   struct drm_plane *plane;
> > > +   int ret, i;
> > > +
> >
> > You're missing the modeset locking for vc4->load_tracker. See the
> > kerneldoc for drm_atomic_get_private_obj_state().
>
> Hm, I missed this part of the doc, and I thought we were protected by
> the modeset lock.
>
> > Probably a good time to
> > implement the locking refactoring idea I have and just implement a per
> > private_obj lock, and remove all the ad-hoc locking from all the callers?
>
> Let me see if I get it correctly. You want to add a lock to the
> drm_private_obj struct, right?

Yup, plus automatically grab that lock in get_private_obj_state, and
then drop the now superfluous locking from all the callers.
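A minimal userspace sketch of that idea follows. A simple flag stands in for drm_modeset_lock, and the names only mirror the real DRM ones (this is not the actual API); in the kernel, the lock would be taken through the atomic acquire context so that contention can back off and retry, and released when the atomic state is cleaned up.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct drm_private_state {
	int val;
};

struct drm_private_obj {
	bool locked;			/* stand-in for a per-object drm_modeset_lock */
	struct drm_private_state state;
};

/* The lock is grabbed here, inside the accessor, instead of by every
 * caller -- which is the whole point of the refactoring idea. */
static struct drm_private_state *
get_private_obj_state(struct drm_private_obj *obj)
{
	obj->locked = true;
	return &obj->state;
}

/* In the kernel this would happen when the atomic state is cleaned up. */
static void put_private_obj_state(struct drm_private_obj *obj)
{
	obj->locked = false;
}
```

With this shape, callers like vc4_load_tracker_atomic_check() would no longer need any ad-hoc locking of their own around the private object.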

>
> >
> > Would definitely simplify the code, and avoid "oops no locking" issues
> > like here.
>
> Yep, I agree.
>
> >
> > Cheers, Daniel
> >
> > > +   priv_state = drm_atomic_get_private_obj_state(state,
> > > +                                                 &vc4->load_tracker);
> > > +   if (IS_ERR(priv_state))
> > > +           return PTR_ERR(priv_state);
> > > +
> > > +   load_state = to_vc4_load_tracker_state(priv_state);
> > > +   for_each_oldnew_plane_in_state(state, plane, old_plane_state,
> > > +                                  new_plane_state, i) {
> > > +           struct vc4_plane_state *vc4_plane_state;
> > > +
> > > +           if (old_plane_state->fb && old_plane_state->crtc) {
> > > +                   vc4_plane_state = to_vc4_plane_state(old_plane_state);
> > > +                   load_state->membus_load -= vc4_plane_state->membus_load;
> > > +                   load_state->hvs_load -= vc4_plane_state->hvs_load;
> > > +           }
> > > +
> > > +           if (new_plane_state->fb && new_plane_state->crtc) {
> > > +                   vc4_plane_state = to_vc4_plane_state(new_plane_state);
> > > +                   load_state->membus_load += vc4_plane_state->membus_load;
> > > +                   load_state->hvs_load += vc4_plane_state->hvs_load;
> > > +           }
> > > +   }
> > > +
> > > +   /* The absolute limit is 2Gbyte/sec, but let's take a margin to let
> > > +    * the system work when other blocks are accessing the memory.
> > > +    */
> > > +   if (load_state->membus_load > SZ_1G + SZ_512M)
> > > +           return -ENOSPC;
> > > +
> > > +   /* HVS clock is supposed to run @ 250 MHz, let's take a margin and
> > > +    * consider the maximum number of cycles is 240M.
> > > +    */
> > > +   if (load_state->hvs_load > 240000000ULL)
> > > +           return -ENOSPC;
> >
> > EINVAL is for atomic_check failures. ENOSPC isn't one of the permitted
> > errno codes, see the kernel-doc for &drm_mode_config_funcs.atomic_check.
> > atomic_commit has a different set of permissible errno codes.
>
> Okay, will change ENOSPC into EINVAL.
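For reference, the agreed change boils down to the following standalone sketch of the two budget checks with the errno swapped to -EINVAL (SZ_1G and SZ_512M are defined here to expand to the same values as the kernel macros):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define SZ_1G   (1ULL << 30)
#define SZ_512M (1ULL << 29)

/* Sketch of the two budget checks from the patch, returning -EINVAL
 * instead of -ENOSPC, as agreed in the review. */
static int vc4_check_load(uint64_t membus_load, uint64_t hvs_load)
{
	/* Absolute membus limit is 2 Gbyte/sec; keep a 512M margin for
	 * other blocks accessing the memory bus. */
	if (membus_load > SZ_1G + SZ_512M)
		return -EINVAL;

	/* HVS clock nominally runs at 250 MHz; budget 240M cycles to
	 * keep a margin, since the HVS is not shared with other blocks. */
	if (hvs_load > 240000000ULL)
		return -EINVAL;

	return 0;
}
```

A load within both budgets passes (returns 0); exceeding either one fails atomic_check with -EINVAL, which is within the set of errno codes permitted for &drm_mode_config_funcs.atomic_check.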
>
> >
> > We should probably enforce this in drm core ...
>
> Enough on my plate for now, I'll let someone else take care of
> that if you don't mind :-P.

Hm, would make for a neat starter task in Documentation/gpu/todo.rst.
If I can volunteer you for the typing :-)
-Daniel

>
> Thanks for the review.
>
> Boris



-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel



