On Wed, Jan 30, 2019 at 10:25:51AM +0100, Linus Walleij wrote:
> However for native GPIOs this driver has a quite unorthodox
> loopback to request some GPIOs from the SoC GPIO chip by
> looking it up from the device tree using gpiochip_find()
> and then offseting hard into its numberspace. This has
> been augmented a bit by using gpiochip_request_own_desc()
> but this code really needs to be verified. If "native CS"
> is actually an SoC GPIO, why is it even done this way?
> Should this GPIO not just be defined in the device tree
> like any other CS GPIO? I'm confused.
[...]
> +
> +	/*
> +	 * Translate native CS to GPIO
> +	 *
> +	 * FIXME: poking around in the gpiolib internals like this is
> +	 * not very good practice. Find a way to locate the real problem
> +	 * and fix it. Why is the GPIO descriptor in spi->cs_gpiod
> +	 * sometimes not assigned correctly? Erroneous device trees?
> +	 */

The spi0 master on the Raspberry Pi has two bits in the CS register
(Control & Status register, bits 0 and 1) to set Chip Select. For this
to work, the SPI slaves' Chip Select pin needs to be connected to
GPIO 8 (Chip Select 0) / GPIO 7 (Chip Select 1), or alternatively to
GPIO 36 (Chip Select 0) / GPIO 35 (Chip Select 1). Those GPIO pins
also need to be set to function "alt0" in the pin controller.

Martin Sperl found out that controlling Chip Select in this way is
unreliable, cf. commit e3a2be3030e2 ("spi: bcm2835: fill FIFO before
enabling interrupts to reduce interrupts/message"):

   "To reduce the number of interrupts/message we fill the FIFO before
    enabling interrupts - for short messages this reduces the interrupt
    count from 2 to 1 interrupt.

    There have been rare cases where short (<200ns) chip-select switches
    with native CS have been observed during such operation, this is why
    this optimization is only enabled for GPIO-CS."
For this reason Martin amended the driver with commit a30a555d7435
("spi: bcm2835: transform native-cs to gpio-cs on first spi_setup")
such that Chip Select is never controlled via the two bits in the CS
register, but rather by having the SPI core drive the GPIO pins
explicitly. I have since added further optimizations which likely
wouldn't work reliably with native CS.

The device tree in the downstream (Foundation) kernel tree has the
cs-gpios property, hence uses GPIO Chip Select by default. The
upstream (mainline) kernel does not have the cs-gpios property, so I
suppose on that one spi-bcm2835.c converts the native CS to a GPIO CS
on ->setup(). Older Foundation device trees may likewise lack the
cs-gpios property, I'm not sure.

The code in bcm2835_spi_setup() is broken in that it always assumes
that GPIO 8 / 7 is used. It should instead check whether the function
of GPIO 8 / 7 or GPIO 36 / 35 is "alt0". If neither GPIO 8 / 36
(or 7 / 35 if "reg" is 1) is set to "alt0", the function should return
a negative error code.

I'm not sure what the expected behaviour is if *both* GPIO 8 / 36
(or 7 / 35) are set to "alt0". I suspect the correct thing to do would
be to drive both pins because we don't know which one the slave is
attached to. Because the SPI core only supports a single CS GPIO per
slave, it would be necessary to request both pins in the driver and
add a ->set_cs() callback which sets either or both pins, depending on
which one is set to "alt0". So, it's complicated.

Personally I've never used the native CS to GPIO CS conversion
mechanism (because the Revolution Pi ships with a Foundation-based
kernel and DT).

By the way, the spec reference for the above is page 155 (CS register
of the SPI master) and page 102 (pin controller function table):

https://www.raspberrypi.org/app/uploads/2012/02/BCM2835-ARM-Peripherals.pdf

Thanks,

Lukas