On Fri, Nov 18, 2016 at 9:43 AM, Daniel Vetter <daniel@xxxxxxxx> wrote:
> On Thu, Nov 17, 2016 at 06:03:46PM -0800, Dhinakaran Pandiyan wrote:
>> drm_dp_find_vcpi_slots() returns an error when there is not enough
>> available bandwidth on a link to support a mode. This error should make
>> compute_config() fail. Not returning false could result in a modeset
>> that will not work.
>>
>> Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@xxxxxxxxx>
>> ---
>>  drivers/gpu/drm/i915/intel_dp_mst.c | 4 ++++
>>  1 file changed, 4 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/i915/intel_dp_mst.c b/drivers/gpu/drm/i915/intel_dp_mst.c
>> index e21cf08..4280a83 100644
>> --- a/drivers/gpu/drm/i915/intel_dp_mst.c
>> +++ b/drivers/gpu/drm/i915/intel_dp_mst.c
>> @@ -63,6 +63,10 @@ static bool intel_dp_mst_compute_config(struct intel_encoder *encoder,
>>
>>  	pipe_config->pbn = mst_pbn;
>>  	slots = drm_dp_find_vcpi_slots(&intel_dp->mst_mgr, mst_pbn);
>> +	if (slots < 0) {
>> +		DRM_ERROR("not enough available time slots for pbn=%d\n", mst_pbn);
>
> No DRM_ERROR for cases that are expected to fail, i.e. DRM_DEBUG_KMS is
> the right level.
>
> And I don't think this works correctly either. Assume you do an atomic
> modeset where you enable 2 mst sinks at the same time, and the following
> happens:
>
> - mst connector 1 can be allocated, and passes
>   intel_dp_mst_compute_config.
>
> - mst connector 2 can be allocated, but not together with connector 1.
>   But drm_dp_find_vcpi_slots only checks what's available, it doesn't do
>   a temporary reservation.
>
> And we can't just reserve the slot in drm_dp_find_vcpi_slots, because
> then in the above case (when connector 2 doesn't have enough slots
> anymore) we'd leak the vcpi allocation for connector 1.
>
> Even worse: when we do a TEST_ONLY atomic commit (i.e. we only run the
> atomic_check/compute_config code, but don't commit anything), such a
> leak can happen even for a successful commit.
>
> Long story short: To fix this properly we need to rewrite the dp helpers
> to be compliant with the atomic design. I think for that we roughly need:
>
> - Extract vcpi allocations into a free-standing state structure. I'd
>   call it drm_dp_mst_state or similar (see the first sketch below).
>   Provide duplicate(get_state)/release functions like we already have
>   for plane, connector and crtc states in the core, and e.g. for the
>   dpll configuration in i915/intel_dpll_mgr.c. Ander Conselvan can help
>   you with this. I'm also planning to write better documentation for how
>   to do this exactly (since you also need a ww_mutex to protect this
>   state), and I'll prioritize that work.
>
> - Wire up that dp mst state at the right places globally (we need one
>   slot per drm_dp_mst_topology_mgr, i.e. per port), and duplicate that
>   state in intel_dp_mst_compute_config. We can't wire this up anywhere
>   in the core since the dp mst library is a helper library, so all the
>   integration points need to be done explicitly in drivers.
>
> - Do the same for nouveau - nouveau is now also atomic, and it also is
>   atomic with mst support.
>
> - While doing all that make sure that the existing (non-atomic-compliant)
>   functions in the dp mst helpers keep working; we still need those for
>   amdgpu.
>
> - Create a new drm_dp_state_allocate_vcpi_slots, which only touches the
>   new drm_dp_mst_state structure and allocates the vcpi slots there (see
>   the second sketch below). Also add some function to find the
>   allocation for each sink again (we need that in the modeset commit
>   functions).
>
> - Rework our compute_config and modeset code to use this new function,
>   and not the old legacy find/allocate functions.
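>
> To make the drm_dp_mst_state idea a bit more concrete, something like
> this is what I have in mind. It's a rough, completely untested sketch:
> none of these names exist in the helpers yet, DRM_DP_MAX_PORTS is made
> up, and the real thing needs per-port tracking plus the ww_mutex
> protection mentioned above instead of a flat array:
>
> #include <linux/err.h>
> #include <linux/slab.h>
>
> #define DRM_DP_MAX_PORTS 16	/* placeholder only */
>
> struct drm_dp_mst_state {
> 	struct drm_dp_mst_topology_mgr *mgr;
> 	int avail_slots;		/* slots still free in this state */
> 	int vcpi[DRM_DP_MAX_PORTS];	/* slots allocated per port */
> };
>
> /*
>  * Duplicate the current allocation state; compute_config then only
>  * works on the copy. A failed check or a TEST_ONLY commit simply
>  * releases the copy again, so nothing can leak.
>  */
> static struct drm_dp_mst_state *
> drm_dp_mst_duplicate_state(struct drm_dp_mst_state *old)
> {
> 	struct drm_dp_mst_state *state;
>
> 	state = kmemdup(old, sizeof(*state), GFP_KERNEL);
> 	if (!state)
> 		return ERR_PTR(-ENOMEM);
>
> 	return state;
> }
>
> static void drm_dp_mst_release_state(struct drm_dp_mst_state *state)
> {
> 	kfree(state);
> }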
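>
> And the new allocation function plus the compute_config rework would
> then look roughly like this. Again just a sketch with made-up names;
> the slots-from-pbn computation mirrors what drm_dp_find_vcpi_slots does
> today, and the ... elisions stand for the existing code:
>
> static int drm_dp_state_allocate_vcpi_slots(struct drm_dp_mst_state *state,
> 					    struct drm_dp_mst_port *port,
> 					    int pbn)
> {
> 	int slots = DIV_ROUND_UP(pbn, state->mgr->pbn_div);
>
> 	if (slots > state->avail_slots)
> 		return -ENOSPC;
>
> 	/* only the duplicated state is touched, never the hw state */
> 	state->avail_slots -= slots;
> 	/* per-port bookkeeping in state->vcpi[] omitted here */
>
> 	return slots;
> }
>
> static bool intel_dp_mst_compute_config(struct intel_encoder *encoder,
> 					struct intel_crtc_state *pipe_config)
> {
> 	...
> 	pipe_config->pbn = mst_pbn;
>
> 	slots = drm_dp_state_allocate_vcpi_slots(mst_state, port, mst_pbn);
> 	if (slots < 0) {
> 		/* expected failure, hence debug level and not DRM_ERROR */
> 		DRM_DEBUG_KMS("not enough vcpi slots for pbn=%d\n", mst_pbn);
> 		return false;
> 	}
> 	...
> }
>
> The important property is that a failed atomic_check or a TEST_ONLY
> commit only ever touches the duplicated state, which is thrown away
> afterwards, so vcpi allocations can't leak no matter what userspace
> does.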
>
> To make this happen we need buy-in from Ben Skeggs (nouveau maintainer)
> and preferably also from the AMD display team (since they support mst
> already, and also plan to eventually support atomic).
>
> Fixing this correctly is unfortunately a _lot_ more work than your
> simple patch here :(

Adding Ben&Alex&Harry. Alex/Harry, please pull in your mst expert into
this too.

Thanks, Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx