On Thu, Oct 12, 2017 at 08:37:10PM +0300, Ville Syrjälä wrote:
> On Wed, Oct 11, 2017 at 04:19:45PM -0700, Manasi Navare wrote:
> > In case of eDP, because the panel has a fixed mode, the link rate
> > and lane count at which it is trained correspond to the link BW
> > required to support the native resolution of the panel. In case of
> > panels with lower resolutions where fewer lanes are hooked up internally,
> > that number is reflected in the MAX_LANE_COUNT DPCD register of the panel.
> > So it is pointless to fall back to a lower link rate/lane count in case
> > of link training failure on an eDP connector, since the lower link BW
> > will not support the native resolution of the panel and we cannot
> > prune the preferred mode on the eDP connector.
> >
> > In case of link training failure on the eDP panel, something is wrong
> > in the HW internally and hence the driver errors out with a loud
> > and clear DRM_ERROR message.
> >
> > Cc: Clinton Taylor <clinton.a.taylor@xxxxxxxxx>
> > Cc: Jim Bride <jim.bride@xxxxxxxxxxxxxxx>
> > Cc: Jani Nikula <jani.nikula@xxxxxxxxxxxxxxx>
> > Cc: Ville Syrjala <ville.syrjala@xxxxxxxxxxxxxxx>
> > Cc: Dave Airlie <airlied@xxxxxxxxxx>
> > Cc: Daniel Vetter <daniel.vetter@xxxxxxxx>
> > Signed-off-by: Manasi Navare <manasi.d.navare@xxxxxxxxx>
> > ---
> >  drivers/gpu/drm/i915/intel_dp_link_training.c | 25 ++++++++++++++++---------
> >  1 file changed, 16 insertions(+), 9 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
> > index 05907fa..bcccef1 100644
> > --- a/drivers/gpu/drm/i915/intel_dp_link_training.c
> > +++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
> > @@ -328,14 +328,21 @@ intel_dp_start_link_train(struct intel_dp *intel_dp)
> >  		return;
> >
> >  failure_handling:
> > -	DRM_DEBUG_KMS("[CONNECTOR:%d:%s] Link Training failed at link rate = %d, lane count = %d",
> > -		      intel_connector->base.base.id,
> > -		      intel_connector->base.name,
> > -		      intel_dp->link_rate, intel_dp->lane_count);
> > -	if (!intel_dp_get_link_train_fallback_values(intel_dp,
> > -						     intel_dp->link_rate,
> > -						     intel_dp->lane_count))
> > -		/* Schedule a Hotplug Uevent to userspace to start modeset */
> > -		schedule_work(&intel_connector->modeset_retry_work);
> > +	/* Dont fallback and prune modes if its eDP */
> > +	if (!intel_dp_is_edp(intel_dp)) {
> > +		DRM_DEBUG_KMS("[CONNECTOR:%d:%s] Link Training failed at link rate = %d, lane count = %d",
> > +			      intel_connector->base.base.id,
> > +			      intel_connector->base.name,
> > +			      intel_dp->link_rate, intel_dp->lane_count);
> > +		if (!intel_dp_get_link_train_fallback_values(intel_dp,
> > +							     intel_dp->link_rate,
> > +							     intel_dp->lane_count))
> > +			/* Schedule a Hotplug Uevent to userspace to start modeset */
> > +			schedule_work(&intel_connector->modeset_retry_work);
> > +	} else
>
> {} missing around the else. The convention is to put {} around every
> branch if at least one branch needs them.
>
> > +		DRM_ERROR("eDP [CONNECTOR:%d:%s] Link Training failed at link rate = %d, lane count = %d",
>                            ^^^
> That's redundant since the connector name will already say "eDP-<something>"
>
> Apart from that this seems better than blindly pruning the panel's
> native mode. If we ever hit this on real hardware we may have to think
> about your other idea of trying to reduce the link params in a way
> that doesn't result in the loss of the native mode.
>

Yes, I agree.
I started coding that logic at first, but it becomes a bit of a stretch
considering that we should never run into that situation. So I agree
that unless we actually see this on real HW, we should just avoid
lowering the link rate/lane count.

> With the redundant stuff in the error message dropped and {} added this is
> Reviewed-by: Ville Syrjälä <ville.syrjala@xxxxxxxxxxxxxxx>
>

Thanks for the review comments. Yes, I will fix the debug message and
add the necessary {}.

Manasi

> > +			  intel_connector->base.base.id,
> > +			  intel_connector->base.name,
> > +			  intel_dp->link_rate, intel_dp->lane_count);
> >  	return;
> >  }
> > --
> > 2.1.4
>
> --
> Ville Syrjälä
> Intel OTC
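
For reference, here is a minimal sketch of how the failure_handling block
could look with the two review comments applied: braces around both branches
of the if/else, and the redundant "eDP" prefix dropped from the DRM_ERROR
string since intel_connector->base.name already reads "eDP-<something>".
This is only an illustration of the requested changes, not the actual v2
patch:

	return;

 failure_handling:
	/* Don't fall back and prune modes if it's eDP */
	if (!intel_dp_is_edp(intel_dp)) {
		DRM_DEBUG_KMS("[CONNECTOR:%d:%s] Link Training failed at link rate = %d, lane count = %d",
			      intel_connector->base.base.id,
			      intel_connector->base.name,
			      intel_dp->link_rate, intel_dp->lane_count);
		if (!intel_dp_get_link_train_fallback_values(intel_dp,
							     intel_dp->link_rate,
							     intel_dp->lane_count))
			/* Schedule a Hotplug Uevent to userspace to start modeset */
			schedule_work(&intel_connector->modeset_retry_work);
	} else {
		/* eDP has no usable fallback, so just report the failure loudly */
		DRM_ERROR("[CONNECTOR:%d:%s] Link Training failed at link rate = %d, lane count = %d",
			  intel_connector->base.base.id,
			  intel_connector->base.name,
			  intel_dp->link_rate, intel_dp->lane_count);
	}
	return;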