Re: [PATCH] spi: set bits_per_word based on controller's bits_per_word_mask

On Fri, Oct 25, 2019 at 12:56:55PM +0100, Mark Brown wrote:
> On Fri, Oct 25, 2019 at 08:39:48AM +0200, Alvaro Gamez Machado wrote:
> 
> > to claim the specific SPI slave. It may be spidev as in my use case, or it
> > may really be any other driver. But its probe() function is never going to
> > be called because the error is not raised inside the driver, but immediately
> > after forcibly setting the default value to 8 in spi.c
> 
> Then you need to extend the validation the core is doing here to
> skip this parameter when registering the device and only enforce
> it after a driver is bound, we don't have a driver at the time we
> initially register the device so we can't enforce this.

OK, so I can remove the call to __spi_validate_bits_per_word() from
spi_setup() and instead perform the check in spi_drv_probe(), after the
driver's probe() has been called, detaching the device if validation fails.
The behaviour would be basically the same as it is now, but the slave driver
would get an opportunity to choose a different transfer width, one that
matches the capabilities of the master.
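For concreteness, a minimal sketch of what that relocation could look like
in drivers/spi/spi.c (the surrounding probe logic is elided, and the error
handling is illustrative, not a tested patch):

static int spi_drv_probe(struct device *dev)
{
	const struct spi_driver *sdrv = to_spi_driver(dev->driver);
	struct spi_device *spi = to_spi_device(dev);
	int ret;

	/* ... existing clk/irq/PM-domain setup elided ... */

	ret = sdrv->probe(spi);
	if (ret)
		return ret;

	/*
	 * Validate only after the bound driver's probe() has run, so it
	 * has had a chance to pick a width the controller supports.
	 */
	ret = __spi_validate_bits_per_word(spi->controller,
					   spi->bits_per_word);
	if (ret && sdrv->remove)
		sdrv->remove(spi);

	return ret;
}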

> 
> > I can't modify spidev because spidev doesn't even know this is happening.
> 
> You are, at some point, going to need to set your spidev to 32
> bits per word (spidev does already support this).

If I do that, spidev will then need to load the initial bits-per-word value
from the device tree before returning from probe(), otherwise it will end up
detached when the relocated check is performed.
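Something along these lines in spidev_probe(), say (the "spi-bits-per-word"
property name here is my assumption for illustration, not an existing
binding):

static int spidev_probe(struct spi_device *spi)
{
	u32 bpw;

	/* ... existing spidev setup elided ... */

	/*
	 * Hypothetical: seed the initial width from DT so the device
	 * survives the post-probe validation against the controller's
	 * bits_per_word_mask.
	 */
	if (!of_property_read_u32(spi->dev.of_node,
				  "spi-bits-per-word", &bpw))
		spi->bits_per_word = bpw;

	return spi_setup(spi);
}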

Should I go for this?

Thanks

-- 
Alvaro G. M.
