On Tue, May 22, 2018 at 09:46:39AM -0700, Stephen Boyd wrote:

> Quoting Mahadevan, Girish (2018-05-21 08:52:47)
> > Not sure I follow, the intention is to run the controller clock based on
> > the slave's max frequency.

> That's good. The problem I see is that we have to specify the max
> frequency in the controller/bus node, and also in the child/slave node.
> It should only need to be specified in the slave node, so making the
> cur_speed_hz equal the max_speed_hz is problematic. The current speed of
> the master should be determined by calling clk_get_rate().

We don't require that the slaves all individually set a speed, since that
gets a bit redundant; it should be enough to just use the default the
controller provides. A bigger problem with this is that the driver will
never see a transfer which doesn't explicitly have a speed set, as the
core will ensure something is set; open coding this logic in every driver
would obviously be tiresome.

> > The intention was to allow a client to specify slave-specific timing
> > requirements, e.g. CS-CLK delay (based on the slave's data sheet),
> > so that the client drivers could set up these delays and pass them in
> > via the controller_data member of the spi_device structure.
> > The header file was meant to expose these timing params that the client
> > could specify. I honestly didn't know how else a client could specify
> > these to the controller driver.

> Do you mean spi-rx-delay-us and spi-tx-delay-us properties? Those are
> documented but don't seem to be used. There's also the delay_usecs part
> of the spi_transfer structure, which may be what you're talking about.

delay_usecs is for inter-transfer delays within a message rather than for
a delay after the initial chip select assert (it can also be used to keep
chip select asserted for longer after the final transfer). Obviously this
is also something that shouldn't be configured in a driver-specific
fashion.