Re: [PATCH] spi: set bits_per_word based on controller's bits_per_word_mask

On Thu, Oct 24, 2019 at 02:54:37PM +0200, Alvaro Gamez Machado wrote:

> I think the only feasible way, then, is to check whether 8 bits is an
> acceptable width for the master and, if it isn't, apply the lowest
> available data width. I believe this cannot break anything, as it leaves
> 8 as the default unless the master can't work with that number, in which
> case it really doesn't matter what the client device wants because the
> hardware can't provide it.

No, that still leaves the slave driver thinking it's sending 8 bits when
really it's sending something else.  The default is just 8 bits; if the
controller can't do that, then the transfer can't happen and there's an
error.  It's not a good idea to carry on if we're likely to introduce
data corruption.
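
For reference, the validation the core does instead is roughly this
shape (paraphrasing the bits_per_word check in __spi_validate() in
drivers/spi/spi.c): an unsupported width fails the transfer with
-EINVAL rather than being silently rewritten to something the slave
driver didn't ask for:

        if (ctlr->bits_per_word_mask) {
                /* Only 32 bits fit in the mask */
                if (xfer->bits_per_word > 32)
                        return -EINVAL;
                if (!(ctlr->bits_per_word_mask &
                      SPI_BPW_MASK(xfer->bits_per_word)))
                        return -EINVAL;
        }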


