On Thu, 2017-03-23 at 10:59 -0700, Clint Taylor wrote:
> On 03/23/2017 10:23 AM, Jani Nikula wrote:
> > On Thu, 23 Mar 2017, Clint Taylor <clinton.a.taylor@xxxxxxxxx> wrote:
> >> On 03/23/2017 05:30 AM, Jani Nikula wrote:
> >>> On Thu, 23 Mar 2017, clinton.a.taylor@xxxxxxxxx wrote:
> >>>> From: Clint Taylor <clinton.a.taylor@xxxxxxxxx>
> >>>>
> >>>> Several major vendor USB-C->HDMI converters fail to recover a 5.4 GHz 1 lane
> >>>> signal if the Data Link N is greater than 0x80000.
> >>>> Patch detects when 1 lane 5.4 GHz signal is being used and makes the maximum
> >>>> value 20 bit instead of the maximum specification supported 24 bit value.
> >>>>
> >>>> Cc: Jani Nikula <jani.nikula@xxxxxxxxx>
> >>>> Cc: Anusha Srivatsa <anusha.srivatsa@xxxxxxxxx>
> >>>>
> >>>
> >>> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=93578
> >>
> >> I will add it to the commit message.
> >>
> >>>
> >>>> Signed-off-by: Clint Taylor <clinton.a.taylor@xxxxxxxxx>
> >>>> ---
> >>>>  drivers/gpu/drm/i915/i915_reg.h      |  2 ++
> >>>>  drivers/gpu/drm/i915/intel_display.c | 15 +++++++++++----
> >>>>  2 files changed, 13 insertions(+), 4 deletions(-)
> >>>>
> >>>> diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
> >>>> index 04c8f69..838d8d5 100644
> >>>> --- a/drivers/gpu/drm/i915/i915_reg.h
> >>>> +++ b/drivers/gpu/drm/i915/i915_reg.h
> >>>> @@ -4869,6 +4869,8 @@ enum {
> >>>>
> >>>>  #define DATA_LINK_M_N_MASK	(0xffffff)
> >>>>  #define DATA_LINK_N_MAX	(0x800000)
> >>>> +/* Maximum N value useable on some DP->HDMI converters */
> >>>> +#define DATA_LINK_REDUCED_N_MAX	(0x80000)
> >>>>
> >>>>  #define _PIPEA_DATA_N_G4X	0x70054
> >>>>  #define _PIPEB_DATA_N_G4X	0x71054
> >>>> diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
> >>>> index 010e5dd..6e1fdd2 100644
> >>>> --- a/drivers/gpu/drm/i915/intel_display.c
> >>>> +++ b/drivers/gpu/drm/i915/intel_display.c
> >>>> @@ -6315,9 +6315,10 @@ static int intel_crtc_compute_config(struct intel_crtc *crtc,
> >>>>  }
> >>>>
> >>>>  static void compute_m_n(unsigned int m, unsigned int n,
> >>>> -			uint32_t *ret_m, uint32_t *ret_n)
> >>>> +			uint32_t *ret_m, uint32_t *ret_n,
> >>>> +			uint32_t max_link_n)
> >>>>  {
> >>>> -	*ret_n = min_t(unsigned int, roundup_pow_of_two(n), DATA_LINK_N_MAX);
> >>>> +	*ret_n = min_t(unsigned int, roundup_pow_of_two(n), max_link_n);
> >>>
> >>> If there's evidence suggesting "certain other operating systems" always
> >>> use a max (or fixed value) of 0x80000, perhaps we should just follow
> >>> suit? Simpler and less magical.
> >>>
> >>
> >> The other OSes don't appear to be fixed to 0x80000. The calculation in
> >> i915 rounds up to the nearest power of 2, and the other OSes might use a
> >> slightly different calculation; of course I haven't seen their code, so I
> >> don't know their exact formula. HBR3 will cause a higher value to be
> >> calculated, so a fixed value may cause issues. The i915 formula works,
> >> and reducing the value can cause precision issues in the ratio with the
> >> pixel clock.
> >>
> >>>>  	*ret_m = div_u64((uint64_t) m * *ret_n, n);
> >>>>  	intel_reduce_m_n_ratio(ret_m, ret_n);
> >>>>  }
> >>>> @@ -6327,14 +6328,20 @@ static void compute_m_n(unsigned int m, unsigned int n,
> >>>>  		       int pixel_clock, int link_clock,
> >>>>  		       struct intel_link_m_n *m_n)
> >>>>  {
> >>>> +	uint32_t max_link_n = DATA_LINK_N_MAX;
> >>>>  	m_n->tu = 64;
> >>>>
> >>>> +	if ((nlanes==1) && (link_clock >= 540000))
> >>>
> >>> Is the problem really dependent on these conditions? You can get the
> >>> same problematic N value with nlanes == 2 && link_clock == 270000 too.
> >>>
> >>
> >> The offending device only supports a single DP lane up to HBR2.5. This
> >> check matches the datasheet for the part. The offending device works
> >> with our current calculation at 1 lane HBR (270000).
> >
> > Okay, so what bugs me about the approach here is that this adds an
> > arbitrary condition to apply a quirk to a specific device.
> >
> > Instead of "if device X, then apply restriction A", this adds "if
> > condition Y, then apply restriction A". If I understand you correctly,
> > "condition Y" is a superset of "device X", i.e. Y happens also on
> > devices other than X, but on device X condition Y always holds.
> >
> > I'd really like it if we could come up with a) a quirk that we apply
> > only on the affected device(s), or b) rules for M/N that generally make
> > sense with no need to resort to seemingly arbitrary exceptions.
> >
>
> I can detect the specific device through the DP OUI branch value
> returned during DP detect. I can also detect it through the device ID
> string (DPCD 0x503-0x508), which is currently not parsed in i915. Either
> would satisfy "if device X, condition Y, then apply workaround A".

drm_dp_helper.c: drm_dp_downstream_id() does that. -DK

> I would prefer solution b) (rules for M/N), but the code doesn't
> appear to be broken, and I don't believe we should "fix" something that
> is working. The device also works if roundup_pow_of_two() is changed to
> rounddown_pow_of_two(); however, that would apply the change to every
> device connected.
>
> > With the latter I mean things like reducing the M/N before rounding N up
> > to power of two (M and N are always divisible by 2, for example) or
> > having intel_reduce_m_n_ratio() shift them right as long as they have
> > bit 0 unset. At a glance, I'm not sure if this is enough to bring down
> > the N to within the limits of the device, without intentional loss of
> > precision.
> >
> > BR,
> > Jani.
> >
> >
> >>
> >>> BR,
> >>> Jani.
> >>>
> >>>> +		max_link_n = DATA_LINK_REDUCED_N_MAX;
> >>>> +
> >>>>  	compute_m_n(bits_per_pixel * pixel_clock,
> >>>>  		    link_clock * nlanes * 8,
> >>>> -		    &m_n->gmch_m, &m_n->gmch_n);
> >>>> +		    &m_n->gmch_m, &m_n->gmch_n,
> >>>> +		    max_link_n);
> >>>>
> >>>>  	compute_m_n(pixel_clock, link_clock,
> >>>> -		    &m_n->link_m, &m_n->link_n);
> >>>> +		    &m_n->link_m, &m_n->link_n,
> >>>> +		    max_link_n);
> >>>>  }
> >>>>
> >>>>  static inline bool intel_panel_use_ssc(struct drm_i915_private *dev_priv)
> >>>
> >>
> >
>
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx