Re: [PATCH 4/4] iio: ad7949: fix channels mixups

On Mon, 23 Sep 2019 10:21:49 +0200
Andrea Merello <andrea.merello@xxxxxxxxx> wrote:

> On Sat, 21 Sep 2019 at 19:12, Jonathan Cameron
> <jic23@xxxxxxxxxx> wrote:
> 
> > >
> > > If we skip the configuration rewrite when the channel doesn't change -
> > > as discussed above - then we actually _terminate_ the acquisition when
> > > the IIO read is triggered; that is, we are converting the value sampled
> > > right before the IIO read.
> > >
> > > If this is OK then I'll go on; otherwise I think that we should always
> > > do the three cycles (so that triggering an IIO read always also waits
> > > for a new acquisition phase)
> 
> I had a discussion about this with a HW engineer; he said that it's
> probably not necessary to perform a full extra cycle (i.e. SPI xfer +
> udelay(2)). Rather, since the HW is already in acquisition, it should
> be enough to perform the udelay(2) to make sure the internal capacitor
> settles (if we change channel, of course, we also need the SPI xfer
> to update the CFG).
> 
> So indeed it seems to me that:
> - if CFG (channel) changes: we need three full SPI xfers (actual SPI
> xfer + udelay(2))
> - if CFG (channel) doesn't change: we need a udelay(2) [*] - to
> guarantee the user sees a value sampled after the IIO read, as
> discussed - and two full SPI xfers (actual SPI xfer + udelay(2))
> 
> Indeed I also wonder whether it would be enough to cycle CS, without
> performing a full SPI xfer, in order to start the conversion.. But
> given that this whole thing seems a bit complicated and unclear to me,
> I would stick to the dummy cycle for now..
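
Just to check we're thinking of the same flow, here's a rough sketch
of that read path (all names are made up, not the actual driver code;
ad7949_spi_xfer() and ad7949_cfg_word() are assumed helpers):

#include <linux/delay.h>	/* udelay() */
#include <linux/types.h>

/* minimal stand-in for the real driver state */
struct ad7949_adc {
	int current_channel;
	u32 buffer;		/* last word clocked in over SPI */
	/* spi device etc. elided */
};

/* assumed helpers, not shown */
int ad7949_spi_xfer(struct ad7949_adc *adc, u32 tx_word);
u32 ad7949_cfg_word(int channel);

static int ad7949_read_channel(struct ad7949_adc *adc, int channel,
			       u32 *val)
{
	int ret;

	if (channel != adc->current_channel) {
		/* CFG changes: one extra xfer to clock the new config in */
		ret = ad7949_spi_xfer(adc, ad7949_cfg_word(channel));
		if (ret)
			return ret;
		adc->current_channel = channel;
	}

	/* [*] let the sampling cap settle; also guarantees the value
	 * is acquired after this read was requested */
	udelay(2);

	/* dummy xfer: terminates acquisition, starts the conversion */
	ret = ad7949_spi_xfer(adc, 0);
	if (ret)
		return ret;
	udelay(2);

	/* this xfer clocks the conversion result out */
	ret = ad7949_spi_xfer(adc, 0);
	if (ret)
		return ret;
	*val = adc->buffer;

	return 0;
}
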
> 
> > An excellent point.  I agree and suspect we may have this wrong in other
> > sensors.  The question gets more interesting if running in buffered mode
> > as we have had systems using a trigger generated by an external process.
> > I suppose in that case we just have to deal with the offset in the fifo
> > etc in userspace.  
> 
> Yes. I'm not familiar with IIO buffered mode, but for a streaming,
> continuous read mode I guess the user would expect some latency
> anyway (whether due to the ADC, the buffering mechanism itself, or
> whatever).

There are a few ugly corners.
1) They'd normally expect the timestamp to be as closely aligned as
possible with the reading in a given scan.  Perhaps we can think about
how to offset that if we know we are actually looking at the one before
last...
2) There are tightly coupled setups where we are switching a mux in
front of the sensor and driving a trigger off that action.
                                                _________
 _________           |--------MUX CNTRL--------|         |
|         |        __|__        ---TRIGGER-----|         |
| INPUT 1 |-------|     |     __|__            |         |
|_________|       |     |    |     |           |   HOST  |
 _________        | MUX |----| ADC |-----------|         |
|         |       |     |    |_____|           |         |
| INPUT 2 |-------|_____|                      |_________|   
|_________| 

This gets represented as a single 'device' that is a consumer
of the ADC channel, and also provides the trigger and handles
the mux control.

Once you have stitched it all together, the expectation is that
for each 'scan':
1. Set MUX
2. Allow short settling time
3. Trigger the ADC via GPIO or an SPI-triggered read, etc.
4. ADC result comes back.
5. Goto 1.

The full set of inputs is sampled to make up one 'scan' in
the buffer of the container driver.
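
In pseudo-code, something along these lines in the container driver
(all names hypothetical, purely to illustrate the sequence above):

#include <linux/delay.h>	/* usleep_range() */
#include <linux/types.h>

/* minimal stand-in for the container driver state */
struct container {
	int num_inputs;
	unsigned int settle_us;
	/* mux / trigger / adc handles elided */
};

/* assumed helpers, not shown */
void set_mux(struct container *cont, int input);
void trigger_adc(struct container *cont);
int read_adc(struct container *cont, u16 *val);

static int container_do_scan(struct container *cont, u16 *scan_buf)
{
	int i, ret;

	for (i = 0; i < cont->num_inputs; i++) {
		/* 1. set MUX to the next input */
		set_mux(cont, i);
		/* 2. allow a short settling time */
		usleep_range(cont->settle_us, cont->settle_us + 10);
		/* 3. trigger the ADC (GPIO, SPI triggered read, ...) */
		trigger_adc(cont);
		/* 4. collect the ADC result */
		ret = read_adc(cont, &scan_buf[i]);
		if (ret)
			return ret;
		/* 5. goto 1 for the next input */
	}

	return 0;
}
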

Now in theory you could interleave the flows so that you know
the data is coming two scans later, but there isn't currently
a way for the 'container' driver to know that is happening,
so the assumption it makes is that everything is 'direct'.

We could add some means of reporting the offset from trigger
to sample entering the fifo, as that would allow such a container
driver to adjust its mux settings to take that into account.

Note that our container driver also often has its own
capture trigger coming in, so it can get really fiddly, as
we don't have any way of knowing whether we are in a round-robin
situation or just taking a single 'scan'.  In the single-scan
case we want to drop the excess reads from the end; in round robin
we want to keep them, as they are the start of the next scan.

> 
> I didn't look into this buffered thing to see how it works, but maybe
> we can skip the first udelay(2) [*] in the driver in case of buffered
> access?
> 
> > Maybe we should think about adding a note to be careful of that.  Not
> > really sure where we would note it though...  
> 
> IMHO clarifying what the API guarantees and what it doesn't in IIO
> DocBook could be helpful (I didn't see it, but I might have missed
> it)..
I agree we should clarify it, but I'm still not totally sure what the
preferred case is! Perhaps the best plan is to put forward a patch adding
a description of one of the choices, as we can then push people to review
it closely, since it may mean some devices don't comply with the ABI.

There is a third option, which is to let devices describe
what they are doing via a new sysfs attribute.  That may get us
around potential breakage in drivers that are already doing it
the 'wrong' way.
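
For illustration, it could be as simple as a read-only attribute like
this (the attribute name, the trigger_offset field and the semantics
are all made up here; nothing like this exists today):

#include <linux/iio/iio.h>	/* dev_to_iio_dev(), iio_priv() */
#include <linux/iio/sysfs.h>	/* IIO_DEVICE_ATTR_RO() */

/* assumed driver state; trigger_offset is made up for this example */
struct ad7949_adc {
	int trigger_offset;
};

static ssize_t sample_trigger_offset_show(struct device *dev,
					  struct device_attribute *attr,
					  char *buf)
{
	struct iio_dev *indio_dev = dev_to_iio_dev(dev);
	struct ad7949_adc *adc = iio_priv(indio_dev);

	/* e.g. "2" => the sample reaching the fifo was captured two
	 * triggers before the one it appears to belong to */
	return sprintf(buf, "%d\n", adc->trigger_offset);
}
static IIO_DEVICE_ATTR_RO(sample_trigger_offset, 0);

A container driver could then read that back at setup time and rotate
its mux sequence (or drop the first N results) to compensate.
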

> 
> So we could state that a single-shot read must return a value sampled
> after the read has been issued, and that, on the other hand, when using
> buffered mode one should expect some latency.. But indeed, since you
> said that a number of IIO drivers might actually behave in a different
> way, I'm not sure that's a good idea; maybe we could just drop this
> requirement and allow (and document) that a single-shot IIO read may
> just _terminate_ the sampling and trigger the conversion on the
> currently sampled signal? (So in this driver too we can just not care
> about this)..

I don't think we need to worry about the difference between a sample
window stopping vs. one starting at the trigger.  Right now we don't
even say how long that window is, so a naive assumption would be that
we model it as instantaneous.  For single shot, either model is
fine to my mind: we get the nearest practical signal to the point
of reading.  The sysfs interface is slow enough that any fine-grained
control of timing is likely to be garbage anyway :)
The only exception would be really slow sensors, such as light sensors,
where we might be talking hundreds of msecs.  For those I'd argue
we do want the capture to start only after we ask.  I think we are
fine on existing drivers there.

Thanks,

Jonathan

> 
> > Thanks,
> >
> > Jonathan
> >
> >  



