On Mon, Mar 13, 2017 at 06:25:53PM +0100, Adrian Fiergolski wrote:

> Hi Mark,

Please don't top post, reply in line with needed context.  This allows
readers to readily follow the flow of conversation and understand what
you are talking about, and also helps ensure that everything in the
discussion is being addressed.

> In my case, the xilinx_spi_probe function (of the spi-xilinx controller)
> sets bits_per_word_mask of the spi_master struct to support only 16 bit
> words.  Later, xilinx_spi_probe calls of_register_spi_devices, which in
> turn calls of_register_spi_device.  The latter allocates an empty
> spi_device struct and configures its options according to the device
> tree.  bits_per_word is not covered there (why?), so it is left at 0
> (the value after allocation), which by convention means 8 bit support.
> At the end, the same function (of_register_spi_device) calls
> spi_add_device, which finally calls spi_setup.  That call, following the
> same convention, changes bits_per_word to 8 and calls
> __spi_validate_bits_per_word, which fails because the master doesn't
> support 8 bit transfers.  This breaks the registration sequence of the
> device driver.

> As you see, the device driver has no way to modify bits_per_word during
> the registration process, so it can't support such limited controllers.

I can't see any way in which it follows from the above that it's a good
idea to try to override bits per word settings in the device tree; that
just wastes user time and is an abstraction failure.  We need better
handling of defaults done purely in the kernel.
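
For readers following along, here is a minimal standalone sketch of the
failing path described above.  It is not the in-tree code: the names
mirror include/linux/spi/spi.h (SPI_BPW_MASK, bits_per_word_mask,
bits_per_word), but the structs and flow are simplified assumptions made
purely for illustration.

	#include <stdint.h>
	#include <stdio.h>
	#include <errno.h>

	/* Same definition as include/linux/spi/spi.h: bit N-1 set for N-bit words */
	#define SPI_BPW_MASK(bits)	(1u << ((bits) - 1))

	struct fake_master {
		uint32_t bits_per_word_mask;	/* filled in by the controller driver at probe */
	};

	struct fake_device {
		uint8_t bits_per_word;		/* 0 after allocation in of_register_spi_device */
	};

	/* Mirrors the check done by __spi_validate_bits_per_word() */
	static int validate_bits_per_word(const struct fake_master *master, uint8_t bits)
	{
		if (master->bits_per_word_mask &&
		    !(master->bits_per_word_mask & SPI_BPW_MASK(bits)))
			return -EINVAL;
		return 0;
	}

	int main(void)
	{
		/* 16-bit-only controller, as spi-xilinx is configured here */
		struct fake_master master = { .bits_per_word_mask = SPI_BPW_MASK(16) };
		struct fake_device dev = { .bits_per_word = 0 };

		/* spi_setup() turns the unset 0 into the conventional default of 8... */
		if (!dev.bits_per_word)
			dev.bits_per_word = 8;

		/* ...which the 16-bit-only master then rejects, failing registration */
		printf("validate: %d\n", validate_bits_per_word(&master, dev.bits_per_word));
		return 0;
	}

Running it prints validate: -22 (-EINVAL), which is the failure Adrian
is hitting; the point above stands that the fix belongs in how the
kernel picks defaults, not in the device tree.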