Re: [PATCH] spi: set bits_per_word based on controller's bits_per_word_mask

On Thu, Oct 24, 2019 at 02:11:29PM +0100, Mark Brown wrote:
> On Thu, Oct 24, 2019 at 02:54:37PM +0200, Alvaro Gamez Machado wrote:
> 
> > I think the only way this would be feasible is to check whether 8 bits is
> > an acceptable width for the master and, if it isn't, apply the lowest
> > available data width. I believe this cannot break anything, as it leaves 8
> > as the default unless the master can't work with that width, in which case
> > it really doesn't matter what the client device wants because the hardware
> > can't provide it.
> 
> No, that still leaves the slave driver thinking it's sending 8 bits when
> really it's sending something else - the default is just 8 bits, if the
> controller can't do it then the transfer can't happen and there's an
> error.  It's not a good idea to carry on if we're likely to introduce
> data corruption.

Well, yes, but I don't think that's a software issue so much as a hardware one.

If you have a board whose SPI master cannot do 8-bit transfers and you
expect it to communicate with a device that only accepts 8 bits, you're not
going to be able to. Either the kernel raises an error or it keeps quiet and
tries its best. I understand the first option is better, but I also think
that's not a software issue: that hardware simply cannot work as is,
regardless of what we do in software. The two devices simply can't talk to
each other.
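
For reference, a minimal sketch of the fallback I was describing. The
helper name and placement are made up (this is not the actual patch); it
only assumes the existing spi_controller->bits_per_word_mask field and the
SPI_BPW_MASK() macro from <linux/spi/spi.h>:

#include <linux/bitops.h>
#include <linux/spi/spi.h>

/* Sketch only: pick a default word size the controller can actually do. */
static void spi_default_bits_per_word(struct spi_device *spi)
{
	struct spi_controller *ctlr = spi->controller;
	u32 mask = ctlr->bits_per_word_mask;

	/* Respect a width the client driver asked for explicitly. */
	if (spi->bits_per_word)
		return;

	/*
	 * A mask of 0 means the controller accepts any width, so the usual
	 * 8-bit default is fine; likewise if the 8-bit word bit is set.
	 */
	if (!mask || (mask & SPI_BPW_MASK(8)))
		spi->bits_per_word = 8;
	else
		spi->bits_per_word = ffs(mask);	/* lowest supported width */
}

With this, a master that only advertises, say, 32-bit words would get 32 as
the default instead of an 8-bit setting it can never honour.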


-- 
Alvaro G. M.


