On 11/26/2014 02:06 PM, Octavian Purdila wrote:
On Wed, Nov 19, 2014 at 3:32 PM, Octavian Purdila
<octavian.purdila@xxxxxxxxx> wrote:
On Tue, Nov 18, 2014 at 9:35 PM, Octavian Purdila
<octavian.purdila@xxxxxxxxx> wrote:
On Tue, Nov 18, 2014 at 7:23 PM, Lars-Peter Clausen <lars@xxxxxxxxxx> wrote:
As far as I understood, the proposed watermark implementation only
affects the device buffer and, as I mentioned above, that will not help
with reducing the interrupt rate.
By setting the watermark level, the userspace application tells you that
it is OK with getting data with a higher latency than one sample. This
allows the driver to configure the FIFO level and hence reduce the
interrupt rate.
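To make that concrete, here is a minimal sketch of the driver side of that
idea; the register names, bit masks and the regmap handle are hypothetical,
not taken from any real datasheet:

/* Hypothetical registers/bits, shown only to illustrate the idea above. */
#include <linux/bitops.h>
#include <linux/regmap.h>
#include <linux/iio/iio.h>

#define REG_FIFO_CONFIG		0x3e	/* hypothetical FIFO mode + threshold */
#define REG_INT_EN		0x17	/* hypothetical interrupt enable */
#define FIFO_MODE_FIFO		BIT(6)	/* hypothetical "FIFO mode" bit */
#define INT_FIFO_WM		BIT(1)	/* hypothetical watermark interrupt */

struct accel_data {
	struct regmap *regmap;
};

/*
 * Translate the userspace watermark into a hardware FIFO threshold, so the
 * part raises one interrupt per 'watermark' samples instead of one per
 * sample.
 */
static int accel_set_hwfifo_watermark(struct iio_dev *indio_dev,
				      unsigned int watermark)
{
	struct accel_data *data = iio_priv(indio_dev);
	int ret;

	ret = regmap_write(data->regmap, REG_FIFO_CONFIG,
			   FIFO_MODE_FIFO | (watermark & 0x3f));
	if (ret)
		return ret;

	return regmap_write(data->regmap, REG_INT_EN, INT_FIFO_WM);
}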
Hi Lars,
The implementation (as proposed in the patch by Josselin and Yannick)
does not inform the driver of changes to the watermark; that is only
visible to the core IIO / buffer logic.
That should be trivial to add though.
True. I actually started by implementing hardware FIFO support as a
new type of IIO buffer, but I got scared by the buffer demux stuff. I
can take another stab at it, if that sounds better?
OK, I remembered why I bailed on that approach: it would break the
callback buffer. It looks like the buffer callback infrastructure relies
on a push model, while for a hardware FIFO implemented as an IIO buffer
we would need a pull model. There is one driver that takes this approach
(sca3000_ring.c), but it is in staging and the hardware buffer part
seems to be marked as RFC.
Hi again,
I want to revive this thread to help us start moving in the right
direction. Here is a summary of the things discussed so far:
1. The watermark patch from Josselin and Yannick does not allow
reducing the interrupt rate, because the watermark is not propagated to
the driver level. It also lacks a way to set the FIFO mode (which is
important for Android use cases). In addition, we need either the
timeout parameter or an explicit flush trigger.
2. There are two approaches to implementing hardware buffering (a sketch
of each follows below):
a) The one exemplified in the current patch set, where we have
hardware buffers and, based on an interrupt or a software trigger, we
push data to the device buffers. I'm going to call this the push model.
b) Implementing the hardware buffer as an IIO buffer. That basically
means having the driver implement read_first_n to read data directly
from the hardware buffer. I'm going to call this the pull model.
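For illustration, a rough sketch of the push model (a): the watermark (or
timeout/flush) interrupt drains the hardware FIFO into the regular IIO
software buffer. The register names and the 6-byte frame layout are
hypothetical:

/* Hedged sketch of the push model (a). */
#include <linux/interrupt.h>
#include <linux/regmap.h>
#include <linux/iio/iio.h>
#include <linux/iio/buffer.h>

#define REG_FIFO_STATUS	0x0e	/* hypothetical: frames currently queued */
#define REG_FIFO_DATA	0x3f	/* hypothetical FIFO read-out register */
#define FRAME_SIZE	6	/* hypothetical: 3 axes x 16 bit */

struct accel_data {
	struct regmap *regmap;
};

static irqreturn_t accel_fifo_irq_thread(int irq, void *private)
{
	struct iio_dev *indio_dev = private;
	struct accel_data *data = iio_priv(indio_dev);
	u8 frame[FRAME_SIZE];
	unsigned int level;

	if (regmap_read(data->regmap, REG_FIFO_STATUS, &level))
		return IRQ_NONE;

	/* Double buffering: copy every queued frame into the IIO buffer. */
	while (level--) {
		if (regmap_bulk_read(data->regmap, REG_FIFO_DATA,
				     frame, FRAME_SIZE))
			break;
		iio_push_to_buffers(indio_dev, frame);
	}

	return IRQ_HANDLED;
}

A flush trigger or timeout (point 1 above) would presumably reuse the same
drain loop.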
Although the pull model seems more natural, it has some disadvantages:
it breaks the callback buffers (which do not seem to be used, though),
and it breaks in the case where we have a single hardware buffer but
serve multiple IIO devices (e.g. a sensor hub).
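And, for comparison, a rough sketch of the pull model (b): the driver
provides its own iio_buffer_access_funcs and read_first_n copies frames
straight from the device FIFO to userspace. Again, the FIFO registers and
frame size are hypothetical:

/* Hedged sketch of the pull model (b). */
#include <linux/kernel.h>
#include <linux/regmap.h>
#include <linux/uaccess.h>
#include <linux/iio/iio.h>
#include <linux/iio/buffer.h>

#define REG_FIFO_STATUS	0x0e	/* hypothetical: frames currently queued */
#define REG_FIFO_DATA	0x3f	/* hypothetical FIFO read-out register */
#define FRAME_SIZE	6	/* hypothetical: 3 axes x 16 bit */

struct hwfifo_buffer {
	struct iio_buffer buffer;
	struct regmap *regmap;
};

static int hwfifo_read_first_n(struct iio_buffer *buffer, size_t n,
			       char __user *ubuf)
{
	struct hwfifo_buffer *hwbuf =
		container_of(buffer, struct hwfifo_buffer, buffer);
	u8 frame[FRAME_SIZE];
	unsigned int level;
	size_t copied = 0;
	int ret;

	ret = regmap_read(hwbuf->regmap, REG_FIFO_STATUS, &level);
	if (ret)
		return ret;

	/* Copy as many queued frames as fit into the user buffer. */
	while (level-- && copied + FRAME_SIZE <= n) {
		ret = regmap_bulk_read(hwbuf->regmap, REG_FIFO_DATA,
				       frame, FRAME_SIZE);
		if (ret)
			return ret;
		if (copy_to_user(ubuf + copied, frame, FRAME_SIZE))
			return -EFAULT;
		copied += FRAME_SIZE;
	}

	return copied;
}

static const struct iio_buffer_access_funcs hwfifo_access_funcs = {
	.read_first_n = hwfifo_read_first_n,
};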
The push model has the disadvantage that we are using double buffering
and that we need to match the software and hardware FIFO policies.
So, to move forward, I would like to build consensus on which is the
preferred model: push or pull?
The two models have completely different semantics, and which is
preferred probably depends on the use case and not necessarily on the device
itself. Right now for IIO the buffered access uses a push approach and the
sysfs access uses a pull approach, although with sysfs there is no
buffering and it is supposed to always fetch the latest conversion (which is
what the BMC150 seems to call bypass mode).
There might be a need to add support for switching between the two different
modes depending on the userspace use case. But to be honest, I can't see how
a pure pull-based approach would be that useful.
Maybe a hybrid is the way to go, with the kernel driver moving samples from
the hardware buffer to the software buffer if the hardware buffer gets too
full, but at the same time allowing userspace to read directly from the
hardware buffer if there are no more samples in the software buffer.
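Roughly, the read path of such a hybrid could look like this
(sw_buffer_read() and hw_fifo_read() are hypothetical stand-ins, stubbed out
only so the sketch is self-contained):

/* Hedged sketch of the hybrid model: serve reads from the software buffer
 * first, and fall back to pulling straight from the hardware FIFO when the
 * software buffer is empty. */
#include <linux/iio/iio.h>
#include <linux/iio/buffer.h>

/* Hypothetical: return frames already pushed by the IRQ handler, 0 if empty. */
static int sw_buffer_read(struct iio_buffer *buffer, size_t n, char __user *ubuf)
{
	return 0;	/* stub */
}

/* Hypothetical: bulk-read the device FIFO (e.g. over regmap) to userspace. */
static int hw_fifo_read(struct iio_buffer *buffer, size_t n, char __user *ubuf)
{
	return 0;	/* stub */
}

static int hybrid_read_first_n(struct iio_buffer *buffer, size_t n,
			       char __user *ubuf)
{
	int copied;

	/*
	 * The software buffer only fills up when the interrupt handler had to
	 * drain a hardware FIFO that was getting too full, so prefer it.
	 */
	copied = sw_buffer_read(buffer, n, ubuf);
	if (copied)
		return copied;

	/* Otherwise let userspace pull directly from the hardware FIFO. */
	return hw_fifo_read(buffer, n, ubuf);
}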
- Lars