Re: Fwd: [PATCH] SPI: Set SPI bits per words in an OF DeviceTree SPI node

On 14/08/2013 18:19, Stephen Warren wrote:
On 08/14/2013 07:21 AM, leroy christophe wrote:
In the node, you then have to include the 'spi-bits' property.

Example:
      fpga-loader@7 {
          compatible = "cs,fpga-loader";
          spi-max-frequency = <10000000>;
          reg = <7>;
          spi-cs-high;
          spi-bits = <16>;
      };

Shouldn't the driver for cs,fpga-loader be defining how many bits the SPI
transactions should use? If the binding covers HW that can operate at
various bits-per-word, it's reasonable for those individual bindings to
define a property that sets that width, but I'm not convinced it's
reasonable for the SPI core to allow any DT node to specify that, and
presumably override the SPI device's driver's selection.
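
(For reference, the usual path Stephen is describing looks roughly like the
sketch below: the client driver picks the word size itself and lets
spi_setup() validate it against the controller. The probe function name is
invented, this is not code from any real driver.)

#include <linux/spi/spi.h>

/* Sketch only: a client driver requesting 16-bit words the usual way.
 * "fpga_loader_probe" is an illustrative name, not an existing driver. */
static int fpga_loader_probe(struct spi_device *spi)
{
	int ret;

	spi->bits_per_word = 16;	/* the driver, not DT, picks the width */
	ret = spi_setup(spi);
	if (ret)
		return ret;		/* controller cannot do 16-bit words */

	return 0;
}
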
This is a tricky question.

The issue with the CPM (communication co-processor) on the MPC 8xx CPU is
that, in 16-bit mode, the CPM swaps the bytes, so the client driver must
unswap them to use them properly.
It looks like there is the same issue with other CPUs, such as the MPCs
that have the QUICC Engine. In the QE driver, the fix is simple: the
transfer is forced to 8 bits at all times, regardless of what the client
driver requests. This is known to work well, but I don't want to do that
for the CPM driver, because the 16-bit mode consumes a lot less CPU than
the 8-bit mode, and my own drivers for my custom components do work in
16-bit mode (the byte swap is included in those drivers).
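
(To be concrete, the "unswap" I mean is just a per-word swab16() over the
buffer; a minimal sketch, with an invented helper name:)

#include <linux/swab.h>
#include <linux/types.h>

/* Illustration only: the per-word swap a client driver has to do today
 * when the CPM hands back 16-bit words with the two bytes exchanged. */
static void unswap_words(u16 *buf, unsigned int words)
{
	unsigned int i;

	for (i = 0; i < words; i++)
		buf[i] = swab16(buf[i]);
}
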
If there's a byte-swap bug in the CPM HW/driver, it should be solved in
the CPM driver, not in the drivers that happen to communicate over the
CPM driver. As you say below, fixing it in the client drivers is the
wrong thing to do, since then they won't work with other SPI controllers.

Either way, this doesn't seem like something that should be represented
in DT at all; it should be down to the CPM driver to indicate to the SPI
sub-system that it doesn't support 16-bit transfers.

And in general, your answer in no way addresses why this property should
apply to all SPI devices.
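
(What Stephen suggests would, I think, look something like the sketch below
in the controller driver, assuming the bits_per_word_mask mechanism in the
SPI core. The function name is invented and the controller callbacks are
omitted; this is not the real CPM code.)

#include <linux/platform_device.h>
#include <linux/spi/spi.h>

/* Sketch: a controller that cannot do 16-bit words correctly advertises
 * 8-bit only, and spi_setup() then rejects clients asking for any other
 * width. */
static int cpm_spi_probe(struct platform_device *pdev)
{
	struct spi_master *master;
	int ret;

	master = spi_alloc_master(&pdev->dev, 0);
	if (!master)
		return -ENOMEM;

	/* only 8-bit transfers are handled correctly by this controller */
	master->bits_per_word_mask = SPI_BPW_MASK(8);

	ret = spi_register_master(master);
	if (ret)
		spi_master_put(master);

	return ret;
}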

That's the reason why I was proposing a way, through the Device Tree, to
select the word size instead of forcing it inside the driver.
The MAX7301 driver, for instance, which forces 16-bit mode, works
perfectly well on the MPC8xx in 8-bit mode, but doesn't work in 16-bit
mode because it doesn't unswap the bytes.
If I modify that driver to swap the bytes, it will no longer work on
platforms that don't require the bytes to be swapped.
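
(Concretely, the difference is the one sketched below. This is not the
actual max7301 code, just an invented helper showing the two ways the same
16-bit command can go out on the wire; the exact command layout depends on
the chip.)

#include <linux/spi/spi.h>
#include <linux/types.h>

/* Invented helper, not taken from drivers/gpio/max7301.c: send a 16-bit
 * "register, value" command either as one 16-bit word, or as two explicit
 * bytes in 8-bit mode. The two-byte form does not depend on how the
 * controller orders the bytes of a 16-bit word. */
static int write_cmd16(struct spi_device *spi, u8 reg, u8 val)
{
	if (spi->bits_per_word == 16) {
		u16 word = (reg << 8) | val;	/* breaks if the CPM swaps bytes */

		return spi_write(spi, &word, sizeof(word));
	} else {
		u8 buf[2] = { reg, val };	/* byte order is explicit */

		return spi_write(spi, buf, sizeof(buf));
	}
}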

So, really, I don't know what to do here. Do you have any suggestions?

Drop 16-bit support from the CPM driver if it doesn't work.

But as I said, I don't want to drop this support from the driver, because it does work, provided the client driver swaps the bytes beforehand.
My own drivers swap the bytes and it works perfectly well.
I need to keep this capability because it gives a much higher transfer speed than the 8-bit mode on the CPM (about 6 times faster), and this is important when I have to load a big FPGA on my board. But on the other hand, I also need drivers like the MAX7301 to work on my board. That driver works very well when I modify it to force 8-bit mode, but in its standard form it forces itself to 16-bit mode, which makes it fail.

So I need a solution other than dropping the 16-bit mode from the driver.
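
(For clarity, the kind of change I am proposing in the SPI core is roughly
the following. This is a sketch, not the literal patch; it just reads the
'spi-bits' property from the example above while the spi_device is created
from its DT node, much like 'spi-max-frequency' is read today.)

#include <linux/of.h>
#include <linux/spi/spi.h>

/* Sketch of the proposed core change: honour an optional "spi-bits"
 * property when instantiating an spi_device from its DT node. */
static void of_spi_parse_bits(struct spi_device *spi, struct device_node *nc)
{
	u32 value;

	if (!of_property_read_u32(nc, "spi-bits", &value))
		spi->bits_per_word = value;	/* overrides the driver's default */
}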