I believe this is the patch discussed here: https://bugs.freedesktop.org/show_bug.cgi?id=105338

In which case....

Tested-By: Alexander Wilson

On 8/17/18 4:36 PM, Manasi Navare wrote:
> Thanks Lyude.
>
> So based on the initial comments from Jani N, the recommendation was to disconnect
> downclock_mode from drrs_init so that the user can set the downclock mode
> independently from the DRRS mode.
>
> Jani,
> So we would need the following changes:
> * Set panel->downclock_mode in edp_init_connector() using intel_find_panel_downclock()
> * Add downclock_mode->clock when we check against the available BW in mode_valid
> * Also use that in compute_config
> * Then check against downclock_mode on link training fallback
>
> Any more changes recommended here?
>
> Manasi
>
> On Fri, Aug 17, 2018 at 04:43:22PM -0400, Lyude Paul wrote:
>> On Fri, 2018-08-17 at 13:40 -0700, Manasi Navare wrote:
>>> On Fri, Aug 17, 2018 at 04:32:09PM -0400, Lyude Paul wrote:
>>>> After reading the discussion so far on this patch, this sounds correct! One
>>>> nitpick below though:
>>>>
>>>> On Wed, 2018-05-16 at 19:21 -0700, Manasi Navare wrote:
>>>>> This patch fixes the original commit c0cfb10d9e1de49 ("drm/i915/edp:
>>>>> Do not do link training fallback or prune modes on EDP"), which causes
>>>>> a blank screen with certain eDP panels (e.g. seen on the Dell XPS13 9350)
>>>>> where the first link training fails and a retraining is required by falling
>>>>> back to a lower link rate/lane count.
>>>>> Some panels advertise a higher link rate/lane count than what is required
>>>>> to support the panel's native mode.
>>>>> But we always link train at the highest link rate/lane count for eDP,
>>>>> and if that fails we can still fall back to a lower link rate/lane count
>>>>> as long as the fallback link BW still fits the native mode. That avoids
>>>>> pruning the panel's native mode while still retraining at the fallback
>>>>> values to recover from a blank screen.
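Tying the list above to the patch below: if panel->downclock_mode were populated
independently of DRRS (item 1 of the list), the eDP fallback check introduced by
this patch could also consult it before refusing to fall back. A rough sketch only,
not part of the posted patch; it assumes panel.downclock_mode has already been
filled in at eDP init time:

    static bool intel_dp_can_link_train_fallback_for_edp(struct intel_dp *intel_dp,
                                                         int link_rate,
                                                         uint8_t lane_count)
    {
            struct intel_connector *connector = intel_dp->attached_connector;
            const struct drm_display_mode *fixed_mode = connector->panel.fixed_mode;
            /* Assumption: filled in from edp_init_connector(), not only from DRRS init */
            const struct drm_display_mode *downclock_mode = connector->panel.downclock_mode;
            int max_rate = intel_dp_max_data_rate(link_rate, lane_count);

            /* The native mode still fits at the reduced link parameters. */
            if (intel_dp_link_required(fixed_mode->clock, 18) <= max_rate)
                    return true;

            /* Otherwise a lower-clock (DRRS) mode might still fit. */
            if (downclock_mode &&
                intel_dp_link_required(downclock_mode->clock, 18) <= max_rate)
                    return true;

            return false;
    }

Whether falling back when only the downclock mode fits is acceptable (the native
mode would then get pruned after all) is exactly the open question discussed
further down in the thread.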
>>>>>
>>>>> Cc: Clinton Taylor <clinton.a.taylor@xxxxxxxxx>
>>>>> Cc: Jani Nikula <jani.nikula@xxxxxxxxxxxxxxx>
>>>>> Cc: Ville Syrjala <ville.syrjala@xxxxxxxxxxxxxxx>
>>>>> Cc: Daniel Vetter <daniel.vetter@xxxxxxxx>
>>>>> Cc: Lucas De Marchi <lucas.demarchi@xxxxxxxxx>
>>>>> Signed-off-by: Manasi Navare <manasi.d.navare@xxxxxxxxx>
>>>>> ---
>>>>>  drivers/gpu/drm/i915/intel_dp.c               | 25 +++++++++++++++++++++++++
>>>>>  drivers/gpu/drm/i915/intel_dp_link_training.c | 26 +++++++++-----------------
>>>>>  2 files changed, 34 insertions(+), 17 deletions(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
>>>>> index 2cc58596..7f7202a 100644
>>>>> --- a/drivers/gpu/drm/i915/intel_dp.c
>>>>> +++ b/drivers/gpu/drm/i915/intel_dp.c
>>>>> @@ -387,6 +387,21 @@ static bool intel_dp_link_params_valid(struct intel_dp *intel_dp, int link_rate,
>>>>>          return true;
>>>>>  }
>>>>>
>>>>> +static bool intel_dp_can_link_train_fallback_for_edp(struct intel_dp *intel_dp,
>>>>> +                                                     int link_rate,
>>>>> +                                                     uint8_t lane_count)
>>>>> +{
>>>>> +        struct drm_display_mode *fixed_mode = intel_dp->attached_connector->panel.fixed_mode;
>>>>> +        int mode_rate, max_rate;
>>>>> +
>>>>> +        mode_rate = intel_dp_link_required(fixed_mode->clock, 18);
>>>>> +        max_rate = intel_dp_max_data_rate(link_rate, lane_count);
>>>>> +        if (mode_rate > max_rate)
>>>>> +                return false;
>>>>> +
>>>>> +        return true;
>>>>> +}
>>>>> +
>>>>>  int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
>>>>>                                              int link_rate, uint8_t lane_count)
>>>>>  {
>>>>> @@ -396,9 +411,19 @@ int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
>>>>>                                              intel_dp->num_common_rates,
>>>>>                                              link_rate);
>>>>>          if (index > 0) {
>>>>> +                if (intel_dp_is_edp(intel_dp) &&
>>>>> +                    !intel_dp_can_link_train_fallback_for_edp(intel_dp,
>>>>> +                                                              intel_dp->common_rates[index - 1],
>>>>> +                                                              lane_count))
>>>>> +                        return -1;
>>>>>                  intel_dp->max_link_rate = intel_dp->common_rates[index - 1];
>>>>>                  intel_dp->max_link_lane_count = lane_count;
>>>>>          } else if (lane_count > 1) {
>>>>> +                if (intel_dp_is_edp(intel_dp) &&
>>>>> +                    !intel_dp_can_link_train_fallback_for_edp(intel_dp,
>>>>> +                                                              intel_dp_max_common_rate(intel_dp),
>>>>> +                                                              lane_count >> 1))
>>>>> +                        return -1;
>>>> The arguments you pass to intel_dp_can_link_train_fallback_for_edp() are the
>>>> same ones that you assign to intel_dp->max_link_rate and
>>>> intel_dp->max_link_lane_count, so why not just set those latter two first and
>>>> then pass them to intel_dp_can_link_train_fallback_for_edp() afterwards?
>>> Actually I had thought of that. However, if intel_dp_can_link_train_fallback_for_edp()
>>> returns false, then we don't want to update intel_dp->max_link_rate and
>>> lane_count to the reduced fallback values.
>> Ahhh, that makes sense! Ignore that nitpick then :)
>>
>>> The other concerns mentioned on this patch were about checking them against the
>>> downclock mode, which has a lower refresh rate and hence might fit the reduced BW.
>>> But currently the downclock mode is assigned only as part of intel_dp_drrs_init(),
>>> so first we would maybe need to find that in edp_init and save it as part of
>>> intel_dp or something. As well as testing against the alternate mode.
>>> Any thoughts on that?
>> That sounds fine to me!
>> It is probably a bit safer than just blindly downgrading the link.
>>
>>> Manasi
>>>
>>>>>                  intel_dp->max_link_rate = intel_dp_max_common_rate(intel_dp);
>>>>>                  intel_dp->max_link_lane_count = lane_count >> 1;
>>>>>          } else {
>>>>> diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
>>>>> index 3fcaa98..6673975 100644
>>>>> --- a/drivers/gpu/drm/i915/intel_dp_link_training.c
>>>>> +++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
>>>>> @@ -335,22 +335,14 @@ intel_dp_start_link_train(struct intel_dp *intel_dp)
>>>>>          return;
>>>>>
>>>>>  failure_handling:
>>>>> -        /* Dont fallback and prune modes if its eDP */
>>>>> -        if (!intel_dp_is_edp(intel_dp)) {
>>>>> -                DRM_DEBUG_KMS("[CONNECTOR:%d:%s] Link Training failed at link rate = %d, lane count = %d",
>>>>> -                              intel_connector->base.base.id,
>>>>> -                              intel_connector->base.name,
>>>>> -                              intel_dp->link_rate, intel_dp->lane_count);
>>>>> -                if (!intel_dp_get_link_train_fallback_values(intel_dp,
>>>>> -                                                             intel_dp->link_rate,
>>>>> -                                                             intel_dp->lane_count))
>>>>> -                        /* Schedule a Hotplug Uevent to userspace to start modeset */
>>>>> -                        schedule_work(&intel_connector->modeset_retry_work);
>>>>> -        } else {
>>>>> -                DRM_ERROR("[CONNECTOR:%d:%s] Link Training failed at link rate = %d, lane count = %d",
>>>>> -                          intel_connector->base.base.id,
>>>>> -                          intel_connector->base.name,
>>>>> -                          intel_dp->link_rate, intel_dp->lane_count);
>>>>> -        }
>>>>> +        DRM_DEBUG_KMS("[CONNECTOR:%d:%s] Link Training failed at link rate = %d, lane count = %d",
>>>>> +                      intel_connector->base.base.id,
>>>>> +                      intel_connector->base.name,
>>>>> +                      intel_dp->link_rate, intel_dp->lane_count);
>>>>> +        if (!intel_dp_get_link_train_fallback_values(intel_dp,
>>>>> +                                                     intel_dp->link_rate,
>>>>> +                                                     intel_dp->lane_count))
>>>>> +                /* Schedule a Hotplug Uevent to userspace to start modeset */
>>>>> +                schedule_work(&intel_connector->modeset_retry_work);
>>>>>          return;
>>>>>  }
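For what it's worth, item 1 of the list at the top of the thread would roughly mean
looking the downclock mode up during eDP connector init instead of only from the
DRRS path. A minimal sketch, assuming intel_find_panel_downclock() keeps its
current signature and reusing the locals intel_edp_init_connector() already has;
the exact placement is an assumption, not tested code:

    /* In intel_edp_init_connector(), once fixed_mode has been chosen:
     * look up the downclock mode unconditionally (today this only happens
     * inside intel_dp_drrs_init(), and only when DRRS is supported), so the
     * link-training fallback path can consult it even with DRRS disabled.
     */
    if (fixed_mode)
            downclock_mode = intel_find_panel_downclock(dev_priv,
                                                        fixed_mode,
                                                        connector);

The checks against downclock_mode->clock in mode_valid/compute_config (items 2 and 3)
would then presumably follow the same mode_rate vs. max_rate comparison that the
patch already uses.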