Re: [PATCH 2/3] drm/tiny: ili9486: Do not assume 8-bit only SPI controllers

On Thu, Nov 17, 2022 at 02:40:05PM +0100, Carlo Caione wrote:
> On 17/11/2022 12:09, Mark Brown wrote:

> > I don't understand what the commit log is saying here.  The meson-spicc
> > driver advertises support for 8-bit words; if the driver is sending data
> > formatted as a byte stream everything should be fine.
> > It may be that there is some optimisation available from taking
> > advantage of the hardware's ability to handle larger word sizes, but
> > there should be no data corruption issue.

> There is no data corruption, but the 16-bit pixel data has its bytes
> swapped per pixel: for example 0x55AD is sent instead of 0xAD55, and
> this causes the wrong colors to be displayed on the panel.

If the data is being unexpectedly byte swapped then clearly it is
being corrupted.  How is this byte swapping happening?  SPI
devices should default to doing 8-bit transfers; if things
randomly get put into anything other than 8-bit mode without the
client device explicitly asking for it then that seems really
bad.
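
To make the swap concrete, here is a standalone sketch (plain
userspace C rather than driver code, and it assumes a little-endian
CPU) of how a 16-bit RGB565 pixel ends up on the wire when the buffer
is pushed out as 8-bit words, with and without a prior byte swap:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	uint16_t pixel = 0xAD55;	/* value the panel should receive */
	uint8_t wire[2];

	/*
	 * 8 bpw transfer: bytes leave memory in storage order, which on
	 * a little-endian CPU is 0x55 then 0xAD, so the panel sees 0x55AD.
	 */
	memcpy(wire, &pixel, sizeof(pixel));
	printf("no swap: panel sees 0x%02X%02X\n", wire[0], wire[1]);

	/* Pre-swapping the buffer (a swab16()-style swap) restores 0xAD55 */
	uint16_t swapped = (uint16_t)((pixel << 8) | (pixel >> 8));
	memcpy(wire, &swapped, sizeof(swapped));
	printf("swabbed: panel sees 0x%02X%02X\n", wire[0], wire[1]);

	return 0;
}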

> The problem is that the current code is sending data with a hardcoded
> bpw == 8, whether or not the data has been swapped before sending.

> For 8-bit-only controllers the data is swapped by the MIPI DBI code, but
> this is not done for controllers that also support 16-bit; in both cases,
> however, we send the data out the same way with bpw == 8.

> So the same image is basically displayed differently depending on whether
> the SPI controller supports 16 bpw or not. I'm trying to fix this by
> sending the data with bpw == 16 when the controller supports that.

So this is an issue in the MIPI DBI code where the interpretation
of the buffer passed in depends on both a caller parameter and
the capabilities of the underlying SPI controller, meaning that a
driver can suddenly become buggy when used with a new
controller?  I can't really tell what the bits per word being
passed in along with the buffer is supposed to mean; I'd have
expected it to correspond to the format of the buffer, but it
seems like perhaps the buffer is always formatted for 16 bits and
the callers need to pass in the capabilities of the controller,
which is then also checked by the underlying code?  This all
seems extremely confusing; I'm not surprised there are bugs.

At the very least your changelog needs to express clearly what is
going on; the description doesn't appear to correspond to what
you're changing.

> Please note that this is also what is done by mipi_dbi_typec3_command().

The core code does appear to have some checks for controller
capabilities...
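
For what it's worth, a capability-dependent send path of the kind
being described would look roughly like this sketch (not the actual
MIPI DBI code or the patch: send_pixels() is a made-up name, while
spi_is_bpw_supported() and spi_sync_transfer() are existing SPI core
helpers):

#include <linux/spi/spi.h>

static int send_pixels(struct spi_device *spi, const void *buf, size_t len)
{
	struct spi_transfer xfer = {
		.tx_buf = buf,
		.len = len,
	};

	if (spi_is_bpw_supported(spi, 16)) {
		/* The controller shifts each 16-bit word out MSB first */
		xfer.bits_per_word = 16;
	} else {
		/*
		 * 8-bit-only controller: the buffer is assumed to have
		 * been pre-swapped to big-endian byte order (e.g. by the
		 * MIPI DBI helpers) so that plain byte transfers still
		 * give the panel the expected word order.
		 */
		xfer.bits_per_word = 8;
	}

	return spi_sync_transfer(spi, &xfer, 1);
}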
