Re: [PATCH 1/8] iio: backend: add API for interface tuning

> >   
> > > + *
> > > + * Get the current state of the backend channel. Typically used to check if
> > > + * there were any errors sending/receiving data.
> > > + *
> > > + * RETURNS:
> > > + * 0 on success, negative error number on failure.
> > > + */
> > > +int iio_backend_chan_status(struct iio_backend *back, unsigned int chan,
> > > +			    struct iio_backend_chan_status *status)
> > > +{
> > > +	return iio_backend_op_call(back, chan_status, chan, status);
> > > +}
> > > +EXPORT_SYMBOL_NS_GPL(iio_backend_chan_status, IIO_BACKEND);
> > > +
> > > +/**
> > > + * iio_backend_iodelay_set - Set digital I/O delay
> > > + * @back:	Backend device
> > > + * @lane:	Lane number
> > > + * @tap:	Number of taps
> > > + *
> > > + * Controls delays on sending/receiving data. One usecase for this is to
> > > + * calibrate the data digital interface so we get the best results when
> > > + * transferring data. Note that @tap has no unit since the actual delay per tap
> > > + * is very backend specific. Hence, frontend devices should typically iterate
> > > + * through an array of @taps (the size of that array should typically match the
> > > + * number of calibration points on the frontend device) and call this API.
> > > + *
> > > + * RETURNS:
> > > + * 0 on success, negative error number on failure.
> > > + */
> > > +int iio_backend_iodelay_set(struct iio_backend *back, unsigned int lane,
> > > +			    unsigned int tap)  
> > 
> > taps, maybe, given it's a number of them?  
> 
> yeps...
> 
> > Is this an industry standard term - sounds like it probably is but my
> > google fu is failing.
> >   
> 
> Not really (I think). It's very AMD/Xilinx specific. If you google for Xilinx IDELAY
> control you may find something. I could not find a good name (originally I just had
> 'delay', but without a proper unit it felt weird), so I admit I used the one that made
> the most sense for my specific usecase. Open to suggestions though :).

Taps is fine.


> > >  
> > > +/* vendor specific from 32 */
> > > +enum iio_backend_test_pattern {
> > > +	/* modified prbs9 */
> > > +	IIO_BACKEND_ADI_PRBS_9A = 32,  
> > 
> > Not knowing anything much about this, does it make sense to use an enum,
> > or should we face facts that we can't have a true generic interface
> > and just use a suitably sized int?
> >   
> 
> I'm also not a fan of the above but we do have generic/standard patterns in this core
> (and that could be used by others):
> 
> - 0x0: pn9a (device specific, modified pn9)
> - 0x1: pn23a (device specific, modified pn23)
> - 0x4: pn7 (standard O.150)
> - 0x5: pn15 (standard O.150)
> - 0x6: pn23 (standard O.150)
> - 0x7: pn31 (standard O.150)
> - 0x9: pnX (device specific, e.g. ad9361)
> - 0x0A: Nibble ramp (Device specific e.g. adrv9001)
> - 0x0B: 16 bit ramp 
> 
> Luckily, the user we have for this is only using a custom/modified pattern. My
> issue with the int is: how do frontends know what value they need to pass into
> the API? It would really be very backend specific. I know we do expect frontends to
> have some assumed knowledge of the backend they're connected to, but I would like to
> avoid making those assumptions bigger than they need to be.
> 
> My expectation with the enum is that we can have some "contract" between backends and
> frontends on the pattern to use. I guess we could give it a try (unless you have some
> other idea) and if it starts going out of control, I can assume defeat and change it
> to an int.
> 
> Or, is the idea to just have the int parameter and some plain defines in the backend
> header?

Keep it as an enum for now and let's see where this goes.  Things called 
'modified' are always ominous.  Modified how?  The standard defined ones
are easier to argue for.


> 
> > How do you unset the test pattern? I expected a IIO_BACKEND_NO_TESTPATERN = 0
> > or something like that.
> >   
> 
> Since this is on the input direction (and for our particular core), we don't have to
> unset it. When you choose a test pattern, it just tells the core to match for a
> specific signal/pattern. So when you do start getting "real" data, we may still have
> those status bits saying there are "errors" but in reality we don't care. We just
> care during the tuning/calibration procedure, as we configure matching patterns between
> frontend and backend...
> 
> OTOH for the axi-dac, for example, we do need to unset the test pattern. And we do
> that by (re)configuring the internal CW tone or the external data source (typically
> some DMA core).

Can we unset it for both input and output?  May make no difference, but easier to reason about
perhaps.

> 
> 
> > > +	IIO_BACKEND_TEST_PATTERN_MAX
> > > +};
