On Thursday 15 February 2018 16:19:07 CET, Mark Brown wrote:
> I think this is a sensible and reasonable thing to want to do however I
> think we should move this from a DT property to being something in the
> transfer structure which drivers then implement. That way if a device
> has an inter-word delay requirement it can go in the individual device
> driver so it always gets applied. Does that make sense to you?
Yep, that indeed makes sense (damn, my easy attempt didn't make it :)).

What should the interface look like? Is spi_controller->mode_bits a good
place for controllers to advertise this feature (SPI_WORD_DELAY, perhaps?),
with spi-orion being the only implementation for now? If we allow each
spi_transfer to override this value, is there a common place in the SPI
core that validates a spi_transfer against each controller's capabilities?
I see code like that (bad_bits, ugly_bits) in spi_setup(), but that looks
like something called per-device, not per-transfer.
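To make the question concrete, here is a rough sketch of what I have in
mind; the word_delay_usecs field and the SPI_WORD_DELAY mode bit are made
up, nothing like this exists upstream yet:

	/* include/linux/spi/spi.h: hypothetical per-transfer field */
	struct spi_transfer {
		...
		u16	word_delay_usecs; /* us of idle time between words */
		...
	};

	/* drivers/spi/spi.c, in __spi_validate(): reject what the
	 * controller cannot do, mirroring the bits_per_word checks */
	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
		if (xfer->word_delay_usecs &&
		    !(ctlr->mode_bits & SPI_WORD_DELAY))
			return -EINVAL;
	}

That way a client driver could set the delay on its transfers directly
and it would get applied (or rejected) regardless of which controller the
device happens to sit behind.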
My most important use case is, however, the userspace-facing spidev, and
its ioctl complicates things a bit. The relevant struct now has 16 bits of
padding (it was 32 bits prior to
https://patchwork.kernel.org/patch/3715391/). Is it OK to use this padding
for the feature? Or should I take just 8 bits and limit the delay to an
arbitrary maximum of 255 us? Or should I not worry about that now and let
somebody else come up with another, bigger ioctl when they need the space?
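For reference, the uapi struct as it stands today; one option would be to
carve the new field out of the 16-bit pad (the field name is only a
suggestion):

	/* include/uapi/linux/spi/spidev.h, current layout */
	struct spi_ioc_transfer {
		__u64	tx_buf;
		__u64	rx_buf;

		__u32	len;
		__u32	speed_hz;

		__u16	delay_usecs;
		__u8	bits_per_word;
		__u8	cs_change;
		__u8	tx_nbits;
		__u8	rx_nbits;
		__u16	pad;	/* option: __u8 word_delay_usecs; __u8 pad; */
	};

Splitting the pad keeps sizeof(struct spi_ioc_transfer) unchanged, so the
SPI_IOC_MESSAGE() ioctl numbers (which encode the struct size) stay the
same; the cost is the 255 us cap.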
With kind regards,
Jan