On Fri, Apr 05, 2019 at 01:13:11PM -0700, Rodrigo Vivi wrote:
> On Fri, Apr 05, 2019 at 11:18:25AM -0700, Manasi Navare wrote:
> > On Fri, Apr 05, 2019 at 10:52:20AM +0300, Jani Nikula wrote:
> > > Commit 7769db588384 ("drm/i915/dp: optimize eDP 1.4+ link config fast
> > > and narrow") started to optimize the eDP 1.4+ link config, both per spec
> > > and as preparation for display stream compression support.
> > >
> > > Sadly, we again face panels that flat out fail with parameters they
> > > claim to support. Revert, and go back to the drawing board.
> >
> > Yup, multiple users are already facing this issue with eDP 1.4 panels that
> > require max parameters to pass link training.
>
> I was wondering if we should blacklist the panel/platform, but if
> there are multiple cases it is better to revert...

Yes, the bug has been reported and seen by at least 4 users with different
eDP panel product IDs.

>
> Unless there's a way to try and fallback quickly?!

I submitted a patch earlier to fall back to the max parameters if the
optimized ones don't work:
https://patchwork.freedesktop.org/patch/296273/?series=58975&rev=2

But airlied suggested a revert instead. (A rough sketch of the fallback
idea is appended after the quoted patch below.)

Manasi

> >
> > I hear you now :)
> >
> > Reviewed-by: Manasi Navare <manasi.d.navare@xxxxxxxxx>
> > >
> > > v2: Actually revert to max params instead of just wide-and-slow.
> > >
> > > Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109959
> > > Fixes: 7769db588384 ("drm/i915/dp: optimize eDP 1.4+ link config fast and narrow")
> > > Cc: Ville Syrjälä <ville.syrjala@xxxxxxxxxxxxxxx>
> > > Cc: Manasi Navare <manasi.d.navare@xxxxxxxxx>
> > > Cc: Rodrigo Vivi <rodrigo.vivi@xxxxxxxxx>
> > > Cc: Matt Atwood <matthew.s.atwood@xxxxxxxxx>
> > > Cc: "Lee, Shawn C" <shawn.c.lee@xxxxxxxxx>
> > > Cc: Dave Airlie <airlied@xxxxxxxxx>
> > > Cc: intel-gfx@xxxxxxxxxxxxxxxxxxxxx
> > > Cc: <stable@xxxxxxxxxxxxxxx> # v5.0+
> > > Signed-off-by: Jani Nikula <jani.nikula@xxxxxxxxx>
> > > ---
> > >  drivers/gpu/drm/i915/intel_dp.c | 69 +++++----------------------------
> > >  1 file changed, 10 insertions(+), 59 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> > > index 72c490..dfa770 100644
> > > --- a/drivers/gpu/drm/i915/intel_dp.c
> > > +++ b/drivers/gpu/drm/i915/intel_dp.c
> > > @@ -1856,42 +1856,6 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
> > >  	return -EINVAL;
> > >  }
> > >
> > > -/* Optimize link config in order: max bpp, min lanes, min clock */
> > > -static int
> > > -intel_dp_compute_link_config_fast(struct intel_dp *intel_dp,
> > > -				  struct intel_crtc_state *pipe_config,
> > > -				  const struct link_config_limits *limits)
> > > -{
> > > -	struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
> > > -	int bpp, clock, lane_count;
> > > -	int mode_rate, link_clock, link_avail;
> > > -
> > > -	for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) {
> > > -		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
> > > -						   bpp);
> > > -
> > > -		for (lane_count = limits->min_lane_count;
> > > -		     lane_count <= limits->max_lane_count;
> > > -		     lane_count <<= 1) {
> > > -			for (clock = limits->min_clock; clock <= limits->max_clock; clock++) {
> > > -				link_clock = intel_dp->common_rates[clock];
> > > -				link_avail = intel_dp_max_data_rate(link_clock,
> > > -								    lane_count);
> > > -
> > > -				if (mode_rate <= link_avail) {
> > > -					pipe_config->lane_count = lane_count;
> > > -					pipe_config->pipe_bpp = bpp;
> > > -					pipe_config->port_clock = link_clock;
> > > -
> > > -					return 0;
> > > -				}
> > > -			}
> > > -		}
> > > -	}
> > > -
> > > -	return -EINVAL;
> > > -}
> > > -
> > >  static int intel_dp_dsc_compute_bpp(struct intel_dp *intel_dp, u8 dsc_max_bpc)
> > >  {
> > >  	int i, num_bpc;
> > > @@ -2028,15 +1992,13 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
> > >  	limits.min_bpp = 6 * 3;
> > >  	limits.max_bpp = intel_dp_compute_bpp(intel_dp, pipe_config);
> > >
> > > -	if (intel_dp_is_edp(intel_dp) && intel_dp->edp_dpcd[0] < DP_EDP_14) {
> > > +	if (intel_dp_is_edp(intel_dp)) {
> > >  		/*
> > >  		 * Use the maximum clock and number of lanes the eDP panel
> > > -		 * advertizes being capable of. The eDP 1.3 and earlier panels
> > > -		 * are generally designed to support only a single clock and
> > > -		 * lane configuration, and typically these values correspond to
> > > -		 * the native resolution of the panel. With eDP 1.4 rate select
> > > -		 * and DSC, this is decreasingly the case, and we need to be
> > > -		 * able to select less than maximum link config.
> > > +		 * advertizes being capable of. The panels are generally
> > > +		 * designed to support only a single clock and lane
> > > +		 * configuration, and typically these values correspond to the
> > > +		 * native resolution of the panel.
> > >  		 */
> > >  		limits.min_lane_count = limits.max_lane_count;
> > >  		limits.min_clock = limits.max_clock;
> > > @@ -2050,22 +2012,11 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
> > >  		      intel_dp->common_rates[limits.max_clock],
> > >  		      limits.max_bpp, adjusted_mode->crtc_clock);
> > >
> > > -	if (intel_dp_is_edp(intel_dp))
> > > -		/*
> > > -		 * Optimize for fast and narrow. eDP 1.3 section 3.3 and eDP 1.4
> > > -		 * section A.1: "It is recommended that the minimum number of
> > > -		 * lanes be used, using the minimum link rate allowed for that
> > > -		 * lane configuration."
> > > -		 *
> > > -		 * Note that we use the max clock and lane count for eDP 1.3 and
> > > -		 * earlier, and fast vs. wide is irrelevant.
> > > -		 */
> > > -		ret = intel_dp_compute_link_config_fast(intel_dp, pipe_config,
> > > -							&limits);
> > > -	else
> > > -		/* Optimize for slow and wide. */
> > > -		ret = intel_dp_compute_link_config_wide(intel_dp, pipe_config,
> > > -							&limits);
> > > +	/*
> > > +	 * Optimize for slow and wide. This is the place to add alternative
> > > +	 * optimization policy.
> > > +	 */
> > > +	ret = intel_dp_compute_link_config_wide(intel_dp, pipe_config, &limits);
> > >
> > >  	/* enable compression if the mode doesn't fit available BW */
> > >  	DRM_DEBUG_KMS("Force DSC en = %d\n", intel_dp->force_dsc_en);
> > > --
> > > 2.20.1
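
For anyone skimming the thread, below is a rough, self-contained sketch (in
plain userspace C, compiled stand-alone) of the fallback idea referenced
above: try an optimized link config first and, if link training fails, retry
with the panel's maximum advertised parameters. To be clear, this is NOT the
i915 implementation and NOT the patch behind the patchwork link; struct
link_cfg, link_train(), train_with_fallback() and the rates used are all
invented purely for illustration.

/*
 * Hypothetical model of the "fall back to max parameters" policy.
 * Every name and value here is made up for illustration only.
 */
#include <stdbool.h>
#include <stdio.h>

struct link_cfg {
	int lane_count;		/* number of DP lanes */
	int link_rate_khz;	/* link symbol clock in kHz */
};

/*
 * Stand-in for link training.  Models a panel that, despite what its DPCD
 * advertises, only trains successfully at its maximum parameters.
 */
static bool link_train(const struct link_cfg *cfg,
		       const struct link_cfg *max, bool picky_panel)
{
	if (!picky_panel)
		return true;

	return cfg->lane_count == max->lane_count &&
	       cfg->link_rate_khz == max->link_rate_khz;
}

/*
 * Policy sketch: try the optimized ("fast and narrow") config first and,
 * if training fails, retry once with the maximum advertised parameters.
 */
static bool train_with_fallback(const struct link_cfg *optimized,
				const struct link_cfg *max, bool picky_panel)
{
	if (link_train(optimized, max, picky_panel)) {
		printf("trained with optimized config: %d lanes @ %d kHz\n",
		       optimized->lane_count, optimized->link_rate_khz);
		return true;
	}

	printf("optimized config failed, falling back to max parameters\n");

	if (link_train(max, max, picky_panel)) {
		printf("trained with max config: %d lanes @ %d kHz\n",
		       max->lane_count, max->link_rate_khz);
		return true;
	}

	return false;
}

int main(void)
{
	const struct link_cfg optimized = { .lane_count = 1, .link_rate_khz = 270000 };
	const struct link_cfg max = { .lane_count = 4, .link_rate_khz = 540000 };

	/* A well-behaved eDP 1.4 panel trains at the optimized config... */
	train_with_fallback(&optimized, &max, false);

	/* ...a picky one only trains once we fall back to max parameters. */
	train_with_fallback(&optimized, &max, true);

	return 0;
}

In the real driver any such retry would have to feed back into the pipe
config computation rather than being a simple local loop, so this only
shows the policy, not the plumbing.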