Re: [PATCH] spi: set bits_per_word based on controller's bits_per_word_mask

On Thu, Oct 24, 2019 at 02:54:37PM +0200, Alvaro Gamez Machado wrote:

> I think then the only way this would be feasible is to check if 8 bits is an
> acceptable number for the master and, if it isn't, apply the lowest
> available data width. I believe this cannot break anything, as it leaves 8
> as the default unless the master can't work with that number, in which case
> it really doesn't matter what client device wants because the hardware can't
> provide it.

No, that still leaves the slave driver thinking it's sending 8 bits when
really it's sending something else.  The default is just 8 bits; if the
controller can't do that then the transfer can't happen and we return an
error.  It's not a good idea to carry on if we're likely to introduce
data corruption.
