On Tue, Oct 16, 2018 at 03:12:54PM +0200, Daniel Vetter wrote:
> On Tue, Oct 16, 2018 at 3:10 PM Boris Brezillon
> <boris.brezillon@xxxxxxxxxxx> wrote:
> >
> > Hi Daniel,
> >
> > On Tue, 16 Oct 2018 14:57:43 +0200
> > Daniel Vetter <daniel@xxxxxxxx> wrote:
> >
> > > On Tue, Oct 16, 2018 at 11:40:45AM +0200, Boris Brezillon wrote:
> > > > The HVS block is supposed to fill the pixelvalve FIFOs fast enough to
> > > > meet the requested framerate. The problem is, the HVS and memory bus
> > > > bandwidths are limited, and if we don't take these limitations into
> > > > account we might end up with HVS underflow errors.
> > > >
> > > > This patch is trying to model the per-plane HVS and memory bus bandwidth
> > > > consumption and take a decision at atomic_check() time whether the
> > > > estimated load will fit in the HVS and membus budget.
> > > >
> > > > Note that we take an extra margin on the memory bus consumption to let
> > > > the system run smoothly when other blocks are making heavy use of the
> > > > memory bus. Same goes for the HVS limit, except the margin is smaller in
> > > > this case, since the HVS is not used by external components.
> > > >
> > > > Signed-off-by: Boris Brezillon <boris.brezillon@xxxxxxxxxxx>
> > > > ---
> > > > This logic has been validated using a simple shell script and
> > > > some instrumentation in the VC4 driver:
> > > >
> > > > - capture underflow errors at the HVS level and expose a debugfs file
> > > >   reporting those errors
> > > > - add debugfs files to expose when atomic_check fails because of the
> > > >   HVS or membus load limitation or when it fails for other reasons
> > > >
> > > > The branch containing those modifications is available here [1], and the
> > > > script (which is internally using modetest) is here [2] (please note
> > > > that I'm bad at writing shell scripts :-)).
> > > >
> > > > Note that those modifications tend to over-estimate the load, and thus
> > > > reject setups that might have previously worked, so we might want to
> > > > adjust the limits to avoid that.
> > > >
> > > > [1] https://github.com/bbrezillon/linux/tree/vc4/hvs-bandwidth-eval
> > > > [2] https://github.com/bbrezillon/vc4-hvs-bandwidth-test
> > > Any interest in using igt to test this stuff? We have at least a bunch of
> > > tests already in there that try all kinds of plane setups. And we use
> > > those to hunt for underruns on i915 hw.
> > >
> > > Wrt underrun reporting: On i915 we just dump them into dmesg at the error
> > > level, using DRM_ERROR, plus a tracepoint. See e.g.
> > > intel_pch_fifo_underrun_irq_handler(). If there's interest we could
> > > perhaps extract this into something common, similar to what was done with
> > > crc support already.
> >
> > Sounds like a good idea. I'll have a look at what's done in the intel
> > driver and will check how feasible it is to have a common
> > infrastructure to test that in igt.
> >
> > Thanks for the pointers.
> >
> > > > +static int vc4_load_tracker_atomic_check(struct drm_atomic_state *state)
> > > > +{
> > > > +	struct drm_plane_state *old_plane_state, *new_plane_state;
> > > > +	struct vc4_dev *vc4 = to_vc4_dev(state->dev);
> > > > +	struct vc4_load_tracker_state *load_state;
> > > > +	struct drm_private_state *priv_state;
> > > > +	struct drm_plane *plane;
> > > > +	int ret, i;
> > > > +
> >
> > > You're missing the modeset locking for vc4->load_tracker. See the
> > > kerneldoc for drm_atomic_get_private_obj_state().
> >
> > Hm, I missed this part of the doc, and I thought we were protected by
> > the modeset lock.
> >
> > > Probably a good time to
> > > implement the locking refactoring idea I have and just implement a per
> > > private_obj lock, and remove all the ad-hoc locking from all the callers?
> >
> > Let me see if I get it correctly. You want to add a lock to the
> > drm_private_obj struct, right?
>
> Yup, plus automatically grab that lock in get_private_obj_state, and
> then drop the now superfluous locking from all the callers.

You mean this? https://patchwork.freedesktop.org/patch/205943/

-- 
Ville Syrjälä
Intel
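
[Editorial note: a rough, hedged sketch of the per-private_obj locking idea
discussed above, for illustration only. It is not the patch Ville links to,
and the field placement and surrounding code are assumptions; the real
drm_atomic_get_private_obj_state() does more than shown here.]

/*
 * Sketch: embed a drm_modeset_lock in struct drm_private_obj and take it
 * when the private state is duplicated into an atomic state, so callers
 * such as vc4_load_tracker_atomic_check() can drop their ad-hoc locking.
 */
struct drm_private_obj {
	struct drm_modeset_lock lock;	/* assumed new per-object lock */
	struct drm_private_state *state;
	const struct drm_private_state_funcs *funcs;
};

struct drm_private_state *
drm_atomic_get_private_obj_state(struct drm_atomic_state *state,
				 struct drm_private_obj *obj)
{
	int ret;

	/* Serialize against other commits touching this private object,
	 * using the atomic state's acquire context for deadlock backoff. */
	ret = drm_modeset_lock(&obj->lock, state->acquire_ctx);
	if (ret)
		return ERR_PTR(ret);

	/* ... existing lookup/duplication of obj->state goes here,
	 * unchanged from the current implementation ... */
	return NULL; /* placeholder for the existing return path */
}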