Re: [PATCH 1/8] iio: backend: add API for interface tuning

On Mon, 2024-04-22 at 18:13 +0100, Jonathan Cameron wrote:
> 
> > >   
> > > > + *
> > > > + * Get the current state of the backend channel. Typically used to check if
> > > > + * there were any errors sending/receiving data.
> > > > + *
> > > > + * RETURNS:
> > > > + * 0 on success, negative error number on failure.
> > > > + */
> > > > +int iio_backend_chan_status(struct iio_backend *back, unsigned int chan,
> > > > +			    struct iio_backend_chan_status *status)
> > > > +{
> > > > +	return iio_backend_op_call(back, chan_status, chan, status);
> > > > +}
> > > > +EXPORT_SYMBOL_NS_GPL(iio_backend_chan_status, IIO_BACKEND);
> > > > +
> > > > +/**
> > > > + * iio_backend_iodelay_set - Set digital I/O delay
> > > > + * @back:	Backend device
> > > > + * @lane:	Lane number
> > > > + * @tap:	Number of taps
> > > > + *
> > > > + * Controls delays on sending/receiving data. One use case for this is
> > > > + * to calibrate the digital data interface so we get the best results
> > > > + * when transferring data. Note that @tap has no unit since the actual
> > > > + * delay per tap is very backend specific. Hence, frontend devices
> > > > + * typically should go through an array of @taps (the size of that
> > > > + * array should typically match the number of calibration points on
> > > > + * the frontend device) and call this API.
> > > > + *
> > > > + * RETURNS:
> > > > + * 0 on success, negative error number on failure.
> > > > + */
> > > > +int iio_backend_iodelay_set(struct iio_backend *back, unsigned int lane,
> > > > +			    unsigned int tap)  
> > > 
> > > taps maybe, given it's a number of them?  
> > 
> > yeps...
> > 
> > > Is this an industry standard term - sounds like it probably is but my
> > > google fu is failing.
> > >   
> > 
> > Not really (I think). It's very AMD/Xilinx specific. If you google for
> > Xilinx IDELAY control you may find something. I could not find a good name
> > (originally I just had 'delay' but without a proper unit it felt weird), so
> > I admit I used the one that made more sense for my specific usecase. Open
> > to suggestions though :).
> 
> Taps is fine.
> 
> 
> > > >  
> > > > +/* vendor specific from 32 */
> > > > +enum iio_backend_test_pattern {
> > > > +	/* modified prbs9 */
> > > > +	IIO_BACKEND_ADI_PRBS_9A = 32,  
> > > 
> > > Not knowing anything much about this, does it make sense to use an enum,
> > > or should we face facts that we can't have a true generic interface
> > > and just use a suitably sized int?
> > >   
> > 
> > I'm also not a fan of the above but we do have generic/standard patterns
> > in this core (and that could be used by others):
> > 
> > - 0x0: pn9a (device specific, modified pn9)
> > - 0x1: pn23a (device specific, modified pn23)
> > - 0x4: pn7 (standard O.150)
> > - 0x5: pn15 (standard O.150)
> > - 0x6: pn23 (standard O.150)
> > - 0x7: pn31 (standard O.150)
> > - 0x9: pnX (device specific, e.g. ad9361)
> > - 0x0A: nibble ramp (device specific, e.g. adrv9001)
> > - 0x0B: 16 bit ramp
> > 
> > Luckily, the one user we have for this is only using a custom/modified
> > pattern. My issue with the int is: how do frontends know what value they
> > need to pass into the API? It would really be very backend specific. I
> > know we do expect frontends to have some assumed knowledge of the backend
> > they're connected to, but I would like to avoid making those assumptions
> > bigger than they need to be.
> > 
> > My expectation with the enum is that we can have some "contract" between
> > backends and frontends on the pattern to use. I guess we could give it a
> > try (unless you have some other idea) and if it starts going out of
> > control, I can admit defeat and change it to an int.
> > 
> > Or, is the idea to just have the int parameter and some plain defines in
> > the backend header?
> 
> Keep it as an enum for now and let's see where this goes.  Things called 
> 'modified' are always ominous.  Modified how?  The standard defined ones
> are easier to argue for.
> 
> 
> > 
> > > How do you unset the test pattern? I expected a IIO_BACKEND_NO_TESTPATERN = 0
> > > or something like that.
> > >   
> > 
> > Since this is on the input direction (and for our particular core), we
> > don't have to unset it. When you choose a test pattern, it just tells the
> > core to match for a specific signal/pattern. So when you do start getting
> > "real" data, we may still have those status bits saying there are "errors"
> > but in reality we don't care. We only care during the tuning/calibration
> > procedure as we configure matching patterns between frontend and
> > backend...
> > 
> > OTOH for the axi-dac, for example, we do need to unset the test pattern.
> > And we do that by (re)configuring the internal CW tone or the external
> > data source (typically some DMA core).
> 
> Can we unset it for both input and output?  May make no difference, but
> easier to reason about perhaps.
> 

Yeah, from an API point of view it would make sense for frontends to explicitly set
IIO_BACKEND_NO_TESTPATERN after they are done with it. On the input device (and on
the ADI specific core) that would be a no-op. But for the output device things become
a bit more ambiguous. On the ADI axi-dac, I guess this would mean setting the
internal CW tone (as tuning is not expected to happen during buffering and the
internal CW tone is the default data source).

Yeah, there's a bit of overlap between tuning and [1]. While from an output
device point of view it could make sense to have the tuning test patterns as
part of the internal signals, for an input device that would not make much
sense (I think). Hence, I decided to keep the test pattern separate from the
data source enum. But I guess now is the right time to bring this up so we can
decide otherwise :).

Also, on second thought, in the axi-dac driver [2], calling
axi_dac_data_source_set(..., IIO_BACKEND_INTERNAL_CONTINUOS_WAVE) on
IIO_BACKEND_NO_TESTPATERN does not look that wrong...

[1]: https://git.kernel.org/pub/scm/linux/kernel/git/jic23/iio.git/tree/include/linux/iio/backend.h?h=testing#n19
[2]: https://git.kernel.org/pub/scm/linux/kernel/git/jic23/iio.git/tree/drivers/iio/dac/adi-axi-dac.c?h=testing#n449

- Nuno Sá





