Re: [PATCH 1/2] drm/dp/i915: Fix DP link rate math

On Wed, Nov 09, 2016 at 09:32:29PM -0800, Dhinakaran Pandiyan wrote:
> We store DP link rates as link clock frequencies in kHz, just like all
> other clock values. But, DP link rates in the DP Spec are expressed in
> Gbps/lane, which seems to have led to some confusion.
> 
> E.g., for HBR2
> Max. data rate = 5.4 Gbps/lane x 4 lanes x 8/10 x 1/8 = 2160000 kBps
> where, 8/10 is for channel encoding and 1/8 is for bit to Byte conversion
> 
> Using link clock frequency, like we do
> Max. data rate = 540000 kHz * 4 lanes = 2160000 kSymbols/s
> Because each symbol carries 8 bits of data, this is 2160000 kBps
> and there is no need to account for channel encoding here.
> 
> But, currently we do 540000 kHz * 4 lanes * (8/10) = 1728000 kBps
> 
> Similarly, while computing the required link bandwidth for a mode,
> there is a mysterious 1/10 term.
> This should simply be pixel_clock (kHz) * bpp * 1/8 to give the final
> result in kBps.
> 
> Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@xxxxxxxxx>
> ---
>  drivers/gpu/drm/i915/intel_dp.c | 28 +++++++++-------------------
>  1 file changed, 9 insertions(+), 19 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> index 8f313c1..7a9e122 100644
> --- a/drivers/gpu/drm/i915/intel_dp.c
> +++ b/drivers/gpu/drm/i915/intel_dp.c
> @@ -161,33 +161,23 @@ static u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
>  	return min(source_max, sink_max);
>  }
>  
> -/*
> - * The units on the numbers in the next two are... bizarre.  Examples will
> - * make it clearer; this one parallels an example in the eDP spec.
> - *
> - * intel_dp_max_data_rate for one lane of 2.7GHz evaluates as:
> - *
> - *     270000 * 1 * 8 / 10 == 216000
> - *
> - * The actual data capacity of that configuration is 2.16Gbit/s, so the
> - * units are decakilobits.  ->clock in a drm_display_mode is in kilohertz -
> - * or equivalently, kilopixels per second - so for 1680x1050R it'd be
> - * 119000.  At 18bpp that's 2142000 kilobits per second.
> - *
> - * Thus the strange-looking division by 10 in intel_dp_link_required, to
> - * get the result in decakilobits instead of kilobits.
> - */
> -
>  static int
>  intel_dp_link_required(int pixel_clock, int bpp)
>  {
> -	return (pixel_clock * bpp + 9) / 10;
> +	/* pixel_clock is in kHz; pixel_clock * bpp / 8 gives the rate in kBps */
> +	return (pixel_clock * bpp + 7) / 8;
>  }
>  
>  static int
>  intel_dp_max_data_rate(int max_link_clock, int max_lanes)
>  {
> -	return (max_link_clock * max_lanes * 8) / 10;
> +	/* max_link_clock is the link symbol clock (LS_Clk) in kHz and not the
> +	 * link rate that is generally expressed in Gbps. Since 8 bits of data
> +	 * are transmitted every LS_Clk per lane, there is no need to account for
> +	 * the channel encoding that is done in the PHY layer here.
> +	 */
> +

Max link here is the max link rate of the actual physical link of the DP
cable. The PHY layer will eventually encode the bits, generating 10 bits for
every 8 bits, so the code rate is 8/10 and the useful net rate (the rate at
which actual data bits are sent) is link_rate * code_rate = link_rate * 8/10.
The maximum rate available at the link layer should therefore be this useful
net rate, so IMHO we should take this channel encoding into account here.
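For concreteness, a minimal standalone sketch (not part of the patch; the
helper names are made up) of the two calculations being discussed, using the
HBR2 x 4 lane numbers from the commit message and the 1680x1050R/18bpp example
from the comment the patch removes:

#include <stdio.h>

/* Patch's view: link clock is the symbol clock, 8 data bits per symbol per lane. */
static int max_data_rate_symbol_clock(int link_clock_khz, int lanes)
{
	return link_clock_khz * lanes;			/* kBps */
}

/* Pre-patch / code-rate view: scale the raw rate by the 8/10 channel encoding. */
static int max_data_rate_code_rate(int link_clock_khz, int lanes)
{
	return link_clock_khz * lanes * 8 / 10;		/* kBps */
}

/* Required bandwidth for a mode: pixel_clock (kHz) * bpp / 8, in kBps. */
static int link_required(int pixel_clock_khz, int bpp)
{
	return (pixel_clock_khz * bpp + 7) / 8;
}

int main(void)
{
	printf("HBR2 x4, symbol clock view:   %d kBps\n",
	       max_data_rate_symbol_clock(540000, 4));	/* 2160000 */
	printf("HBR2 x4, 8/10 code rate view: %d kBps\n",
	       max_data_rate_code_rate(540000, 4));	/* 1728000 */
	printf("1680x1050R at 18bpp needs:    %d kBps\n",
	       link_required(119000, 18));		/* 267750 */
	return 0;
}

The first two results are exactly the 2160000 vs. 1728000 figures from the
commit message; the disagreement is only about whether the 8/10 scaling
belongs at this layer.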

Manasi
> +	return (max_link_clock * max_lanes);
>  }
>  
>  static int
> -- 
> 2.7.4
> 
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx