On 10/4/2022 2:57 PM, Marijn Suijten wrote:
On 2022-10-04 13:22:25, Abhinav Kumar wrote:
On 10/1/2022 12:08 PM, Marijn Suijten wrote:
msm's dsi_host specifies negative BPG offsets, which fill the full 8 bits
of a char thanks to two's complement; those extra bits then bleed into the
adjacent parameter, as the field is only expected to hold 6-bit wide
values.
As a consequence, random slices appear corrupted on-screen (tested on a
Sony Tama Akatsuki device with sdm845).
Use AND operators to limit all values that constitute the RC Range
parameter fields to their expected size.
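To illustrate with arbitrary example values (a standalone sketch, not
code from this patch): with min_qp = 5, max_qp = 9 and a BPG offset of -2
(stored as the 8-bit pattern 0xfe), the unmasked OR corrupts the max_qp
field, while masking to 6 bits keeps the correct 6-bit two's-complement
encoding 0x3e:

#include <stdint.h>
#include <stdio.h>

/* rc_range_parameters layout: min_qp [15:11], max_qp [10:6], bpg_offset [5:0] */
#define MINQP_SHIFT 11
#define MAXQP_SHIFT 6

int main(void)
{
        uint16_t min_qp = 5, max_qp = 9;
        uint8_t bpg_offset = (uint8_t)-2;    /* 0xfe: 8-bit two's complement */

        uint16_t bad  = (min_qp << MINQP_SHIFT) | (max_qp << MAXQP_SHIFT) |
                        bpg_offset;          /* bits 7:6 leak into max_qp */
        uint16_t good = (min_qp << MINQP_SHIFT) | (max_qp << MAXQP_SHIFT) |
                        (bpg_offset & 0x3f); /* 0x3e: 6-bit encoding of -2 */

        /* bad reads back max_qp = 11, good reads back max_qp = 9 */
        printf("bad=0x%04x good=0x%04x\n", bad, good);
        return 0;
}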
Fixes: b9080324d6ca ("drm/msm/dsi: add support for dsc data")
Signed-off-by: Marijn Suijten <marijn.suijten@xxxxxxxxxxxxxx>
---
drivers/gpu/drm/display/drm_dsc_helper.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/display/drm_dsc_helper.c b/drivers/gpu/drm/display/drm_dsc_helper.c
index c869c6e51e2b..2e7ef242685d 100644
--- a/drivers/gpu/drm/display/drm_dsc_helper.c
+++ b/drivers/gpu/drm/display/drm_dsc_helper.c
@@ -243,11 +243,11 @@ void drm_dsc_pps_payload_pack(struct drm_dsc_picture_parameter_set *pps_payload,
*/
for (i = 0; i < DSC_NUM_BUF_RANGES; i++) {
pps_payload->rc_range_parameters[i] =
- cpu_to_be16((dsc_cfg->rc_range_params[i].range_min_qp <<
+ cpu_to_be16(((dsc_cfg->rc_range_params[i].range_min_qp & 0x1f) <<
DSC_PPS_RC_RANGE_MINQP_SHIFT) |
- (dsc_cfg->rc_range_params[i].range_max_qp <<
+ ((dsc_cfg->rc_range_params[i].range_max_qp & 0x1f) <<
DSC_PPS_RC_RANGE_MAXQP_SHIFT) |
- (dsc_cfg->rc_range_params[i].range_bpg_offset));
+ (dsc_cfg->rc_range_params[i].range_bpg_offset & 0x3f));
}
Looking at how some other vendors handle this, they limit the value to
6 bits in their own drivers:
https://gitlab.freedesktop.org/drm/msm/-/blob/msm-next/drivers/gpu/drm/i915/display/intel_vdsc.c#L532
https://gitlab.freedesktop.org/drm/msm/-/blob/msm-next/drivers/gpu/drm/amd/display/dc/dsc/rc_calc_dpi.c#L87
Perhaps msm should do the same thing instead of changing the helper.
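A driver-side variant might look roughly like the sketch below; the
function name and the table contents are illustrative placeholders, not
the actual msm code:

#include <linux/types.h>
#include <drm/display/drm_dsc.h>

/*
 * Hypothetical sketch: mask the signed offsets to 6 bits while filling
 * struct drm_dsc_config in the msm driver, the way i915 and amd do in
 * their own code, instead of masking in drm_dsc_pps_payload_pack().
 */
static void msm_dsi_populate_rc_params(struct drm_dsc_config *dsc)
{
        /* placeholder table of signed per-range BPG offsets */
        static const s8 bpg_offset[DSC_NUM_BUF_RANGES] = {
                2, 0, 0, -2, -4, -6, -8, -8, -8, -10, -10, -12, -12, -12, -12,
        };
        int i;

        for (i = 0; i < DSC_NUM_BUF_RANGES; i++)
                dsc->rc_range_params[i].range_bpg_offset = bpg_offset[i] & 0x3f;
}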
Thanks, I should have done my due diligence and looked up how other
drivers dealt with this, but I wasn't immediately expecting negative
values elsewhere.
Alas, as explained in the cover letter, I opted to perform the masking in
the PPS packing code because the DSC block code also reads these values,
and would suddenly write 6-bit instead of 8-bit values to the
DSC_RANGE_BPG_OFFSET registers. Quick testing on the mentioned sdm845
platform shows no regressions, but I'm not sure if that's safe to rely
on?
I looked up the MDP_DSC_0_RANGE_BPG_OFFSET_* registers.
They take only a 6-bit value according to the SW documentation (bits 5:0).
The hardware was always expecting only a 6-bit value, not 8, so this
change is safe.
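As a standalone illustration of why (assuming the register field really
does latch only bits 5:0, per the SW documentation; this is not the msm
register-write code): the hardware sees the same value whether it is
handed the raw 8-bit pattern or a pre-masked one:

#include <assert.h>
#include <stdint.h>

int main(void)
{
        uint8_t raw    = (uint8_t)-2;  /* 0xfe, as previously stored by msm */
        uint8_t masked = raw & 0x3f;   /* 0x3e, the pre-masked 6-bit value */

        /* a 6-bit register field (bits 5:0) latches the same thing either way */
        assert((raw & 0x3f) == (masked & 0x3f));
        return 0;
}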
If you want to move the masking to the helper, the other drivers need to
be changed too, to remove their now-duplicate & 0x3f.
Sure, we only have to confirm whether those drivers also read back the
value(s) in rc_range_params and expect/allow these to be 8 instead of 6
bits wide.
FWIW, this has already been fixed in the latest downstream driver too.
What is this supposed to mean? Is there a downstream DPU project that
has pending patches needing to be upstreamed? Or is the downstream SDE,
techpack/display, or whatever it is called nowadays, slowly using more
DRM structs like drm_dsc_config and this drm_dsc_pps_payload_pack()
helper function as pointed out in an earlier mail?
No, what I meant was that the version of the downstream driver on which
the upstream DSC support was based seems to be an older one. Downstream
drivers keep getting updated, and we always keep trying to align with
upstream structs.
This is true not just for DSC but for other blocks as well.
So as part of that effort we started using struct drm_dsc_config. That
change was made on newer chipsets, but the downstream SW on sdm845, from
which DSC was upstreamed, apparently didn't have it yet. Hence all this
redundant math happened.
So this comment was more of an explanation of why this issue happened
even though the latest downstream driver doesn't have it.
Off-topic: are SDE and DPU growing closer together, hopefully achieving
feature parity and allowing the SDE project to be dropped in favour of a
fully upstreamed DPU driver, with day-one out-of-the-box mainline support
for new SoCs (as long as the work is published and on its way upstream)?
There is still a large gap between the SDE and DPU drivers at this point.
We keep trying to upstream as many features as possible to minimize that
gap, but there is still a lot of work to do.
- Marijn