Re: Looking for a solution for CPM FSL SPI driver that swaps 16 bits words

On Tue, Mar 07, 2023 at 09:40:09PM +0000, Christophe Leroy wrote:
> On 07/03/2023 at 22:10, Mark Brown wrote:
> > On Tue, Mar 07, 2023 at 07:43:42PM +0000, Christophe Leroy wrote:
> >> On 07/03/2023 at 20:19, Mark Brown wrote:

> So yes, the problem is that the controller *is* swapping data: if I
> format the buffer to have 0x12 0x34 and tell the controller to send it
> as a 16-bit word, it will send 0x3412, whereas if I tell it to send
> 8-bit words, it will send 0x12 then 0x34.

> So a driver like MAX7301 which wants to send 0x1234 as a 16-bit word
> will write 0x1234 into the buffer. The powerpc 8xx being big endian, I
> get 0x12 0x34 into the buffer, and the controller then sends the word 0x3412.

So the CPU is big endian, memory has a big-endian word in it, and the
controller is then sending it as little endian?  That would mean that the
driver for the controller is implementing SPI_LSB_FIRST mode without
being asked (IIRC there is an option to run these systems LE; I don't
know if the controller is just always little endian or always byte
swapping).  That seems like a bug in the driver for the controller.
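
To spell out what you're describing (illustrative values, not taken
from either driver):

    u8 buf[2] = { 0x12, 0x34 };  /* big-endian 0x1234 in memory */

    /* bits_per_word == 8:  wire sees 0x12 then 0x34 (correct)      */
    /* bits_per_word == 16: wire sees 0x34 then 0x12 (byte swapped) */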

> >> Well, I'm not sure what you mean by swapping / not swapping data. Powerpc
> >> 8xx is natively a big endian processor like all PPC32. But its
> >> Communication Processor (CPM) is apparently fetching data as little
> >> endian when told to perform transfers of 16-bit words on SPI.

> > The default wire format for SPI is big endian (MSB first), as covered in
> > spi.h:

> >   * In-memory data values are always in native CPU byte order, translated
> >   * from the wire byte order (big-endian except with SPI_LSB_FIRST).  So
> >   * for example when bits_per_word is sixteen, buffers are 2N bytes long
> >   * (@len = 2N) and hold N sixteen bit words in CPU byte order.

> > LSB_FIRST has only one in-tree user other than spidev, so I'd question
> > how often it's correctly implemented.

> Well, ok, I have no problem with the wire byte order, only with how the
> controller fetches the data from the DMA buffer. I'm not sure what I'm
> writing is clear, is it?

You have a problem with the wire byte order not being what was
requested - the data should always be big endian unless otherwise
specified.
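
Per the spi.h rule quoted above (illustrative only):

    u16 tx = 0x1234;  /* in memory: CPU byte order */

    /* With bits_per_word == 16 the controller must put 0x12 on the
     * wire first regardless of CPU endianness; any swapping needed
     * to achieve that is the controller driver's job.
     */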

> >> So, my problem really is the GPIO MAX7301 driver which requests 16-bit
> >> transfers, because then the SPI controller sends the 2 bytes in reverse
> >> order. Do I understand correctly that from your point of view, that
> >> driver shouldn't request a 16-bit transfer? It is done here, in the
> >> max7301_probe() function,
> >> https://elixir.bootlin.com/linux/v6.3-rc1/source/drivers/gpio/gpio-max7301.c#L50

> > It would certainly improve interoperability with controllers to request
> > 8-bit words, but so long as the driver is reading and writing data in the
> > expected format it should work perfectly well.  Looking at the lack of
> > any endianness handling in the driver that doesn't seem to be the case,
> > though: it's just handling data in CPU byte order, which isn't
> > portable.

> I agree, it should work but it doesn't.
> When I change max7301_probe() to set bits_per_word to 8 instead of 16,
> it works.

So the driver should also work on a little endian system configured for
16-bit words - looking at the code I would only expect it to work on
one endianness of system.
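
For reference, the kind of thing I mean - a sketch of the pattern in
the driver linked above, not a verbatim quote:

    /* The command is built in a native u16 and handed straight to
     * spi_write(); the in-memory byte order of that u16 differs
     * between big and little endian CPUs, so with a controller that
     * byte swaps, which byte hits the wire first depends on the CPU.
     */
    u16 word = ((reg & 0x7f) << 8) | (val & 0xff);

    spi_write(spi, &word, sizeof(word));  /* bits_per_word == 16 */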

> >> Because if I clamp the CPM SPI driver to 8-bit transfers, then I can no
> >> longer perform 16-bit transfers for loading my FPGA, which means I must
> >> reduce the data rate, and then loading the FPGA takes ages.

> > Why?

> Maybe what I wrote here isn't clear. What I mean is that if I modify the
> CPM SPI controller driver to always use 8-bit mode (as done for the QE SPI
> controller via the function mspi_apply_qe_mode_quirks()), then it will work,
> but I will not be able to use the same speed as in 16-bit transfer mode.

Why would we tell the controller to run in 8-bit mode when you don't
need to?  What the driver tells the controller just needs to produce the
correct results given the request from the client driver; it doesn't
need to be what the client driver asked for.

> So, to sum up, the solutions I see:
> A/ Force the CPM SPI controller to always use 8-bit mode.

The SPI controller driver should do whatever is most efficient for the
transfers it was asked to perform.
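
e.g. an 8-bit transfer of even length could still be run with the
hardware in its faster 16-bit mode if the driver compensates
(hypothetical sketch, names invented):

    /* The client asked for 8-bit words; the hardware fetches 16-bit
     * little-endian words, so pre-swapping each byte pair into a
     * bounce buffer keeps the wire order correct in the fast mode.
     */
    for (i = 0; i < t->len / 2; i++)
        bounce[i] = swab16(((const u16 *)t->tx_buf)[i]);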

> B/ Implement a flag to allow a SPI consumer to say "please perform the
> transfer with your bogus but performant 16-bit mode, I have worked
> around the data order for you"

SPI_LSB_FIRST already exists; however, unless a device really is
operating with LSB-first words I'd suggest not using it, since it just
adds confusion and portability problems.

> C/ Implement byte-swapping in the CPM SPI controller driver when a
> consumer asks for a 16-bit data transfer

Yes, if the device doesn't request SPI_LSB_FIRST.
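
i.e. something like this in the controller driver (hypothetical
sketch, names invented):

    /* The client gave us N 16-bit words in CPU byte order; store
     * them little-endian in a bounce buffer so the controller's LE
     * fetch recovers the right value and the MSB goes out first.
     */
    static void cpm_spi_swap_tx(const u16 *src, __le16 *dst, size_t words)
    {
        size_t i;

        for (i = 0; i < words; i++)
            dst[i] = cpu_to_le16(src[i]);
    }

(and the mirror image with le16_to_cpu on receive).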

> D/ Modify the MAX7301 GPIO driver to use 8-bit words instead of 16-bit words.

That seems like it's needed anyway AFAICT; the driver is assuming a
particular endianness.  It could also be modified to format data in a
way that'll work with either endianness, but that seems more confusing.
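
i.e. something along these lines (untested sketch):

    /* Build the frame explicitly in wire order and send it as 8-bit
     * words, leaving no endianness assumption in the driver.
     */
    u8 buf[2] = { reg & 0x7f, val };  /* command byte, then data */

    spi_write(spi, buf, sizeof(buf));  /* bits_per_word == 8 */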

If your FPGA image is just a byte stream then I'd expect the driver for
it to be using 8-bit mode.  It is unclear to me if something has
pre-swapped the image to contort it through whatever set of endianness
assumptions is going on here.  You seem to want it byte-swapped, but
that seems like a surprising format for an FPGA image to be generated
in, which makes me think something swapped the image earlier on?  AFAICT
what you're describing is an FPGA image which is a sequence of
little-endian 16-bit words that needs to be sent to the FPGA as a
sequence of big-endian 16-bit words.

> Solution A will degrade performance by forcing a transfer rate reduction.
> Solution B looks like a "home made" solution.
> Solution C means copy-and-swap into a newly allocated DMA buffer.
> Solution D is... the best one?

> Any other idea?

> Which solution would you prefer?

It seems like it'd be a lot clearer if the FPGA driver worked with a
byte stream; then the controller driver can work out the most efficient
way to actually perform that transfer.  At the minute I'm not clear what
it's actually doing here.
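
i.e. the FPGA driver would just do something like (sketch, field
values invented):

    struct spi_transfer t = {
        .tx_buf        = image,      /* plain byte stream */
        .len           = image_len,
        .bits_per_word = 8,
    };

    /* and the controller driver remains free to use 16-bit or wider
     * accesses internally, so long as the bytes go out in order.
     */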
