Re: [PATCH v8 2/2] iio: adc: max14001: New driver

On Wed, Jul 5, 2023 at 10:55 AM Jonathan Cameron
<Jonathan.Cameron@xxxxxxxxxx> wrote:
> > > From: Jonathan Cameron <Jonathan.Cameron@xxxxxxxxxx>
> > > Sent: Sunday, July 2, 2023 6:04 PM
> > > On Thu, 22 Jun 2023 22:32:27 +0800
> > > Kim Seer Paller <kimseer.paller@xxxxxxxxxx> wrote:

...

> > > > + /*
> > > > +  * Convert transmit buffer to big-endian format and reverse transmit
> > > > +  * buffer to align with the LSB-first input on SDI port.
> > > > +  */
> > > > + st->spi_tx_buffer = cpu_to_be16(bitrev16(FIELD_PREP(MAX14001_ADDR_MASK,
> > > > +                                                      reg_addr)));
> > > > +
> > > > + ret = spi_sync_transfer(st->spi, xfers, ARRAY_SIZE(xfers));
> > > > + if (ret)
> > > > +         return ret;
> > > > +
> > > > + /*
> > > > +  * Align received data from the receive buffer, reversing and reordering
> > > > +  * it to match the expected MSB-first format.
> > > > +  */
> > > > + *data = (__force u16)(be16_to_cpu(bitrev16(st->spi_rx_buffer))) &
> > > > +         MAX14001_DATA_MASK;
> > > > +
> > > These sequences still confuse me a lot because I'd expect the values in tx
> > > to have the opposite operations applied to those for rx and that's not the
> > > case.
> > >
> > > Let's take an le system.
> > > tx = cpu_to_be16(bitrev16(x))
> > >    = cpu_to_be16((__bitrev8(x & 0xff) << 8) | __bitrev8(x >> 8));
> > >    = __bitrev8(x & 0xff) | (__bitrev8(x >> 8) << 8)
> > > or swap all the bits in each byte, but don't swap the bytes.
> > >
> > > rx = be16_to_cpu(bitrev16(x))
> > >    = be16_to_cpu((__bitrev8(x & 0xff) << 8) | __bitrev8(x >> 8))
> > >    = __bitrev8(x & 0xff) | (__bitrev8(x >> 8) << 8)
> > >
> > > also swap all the bits in each byte, but don't swap the bytes.
> > >
> > > So it is the reverse because the byte swaps unwind themselves somewhat.
> > > For a be system cpu_to_be16 etc. are no-ops.
> > > tx = (__bitrev8(x & 0xff) << 8) | __bitrev8(x >> 8)
> > > rx = (__bitrev8(x & 0xff) << 8) | __bitrev8(x >> 8)
> > >
> > > So in this case swap all 16 bits.
> > >
> > > Now, consider what I'd expected: the operations reversed for the tx vs rx case.
> > > E.g.
> > > tx = cpu_to_be16(bitrev16(x))
> > > As above.
> > > For rx, le host
> > > rx = bitrev16(be16_to_cpu(x))
> > >    = (__bitrev8((x >> 8) & 0xff) << 8) | __bitrev8(((x & 0xff) << 8) >> 8)
> > > same as above (if you swap the two terms, I think).
> > >
> > > For be, the be16_to_cpu() is a no-op again, so it's just bitrev16(x) as expected.
> > >
> > > Hence, if I've understood this correctly, you could reverse the terms so that
> > > it was 'obvious' you were doing the opposite for the tx term vs the rx one,
> > > without making the slightest bit of difference....
> > >
> > > hmm. Might be worth doing simply to avoid questions.
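
FWIW, a quick userspace sketch below checks the two orderings Jonathan
describes. The local_bitrev8()/local_bitrev16()/local_swab16() helpers are
stand-ins for the kernel's __bitrev8()/bitrev16() and the byte swap that
cpu_to_be16()/be16_to_cpu() perform on a little-endian CPU, so treat the code
as illustrative only. On an LE host the two expressions come out identical
for every 16-bit value:

	#include <stdint.h>
	#include <stdio.h>

	/* Stand-in for the kernel's __bitrev8() */
	static uint8_t local_bitrev8(uint8_t b)
	{
		b = (b & 0xf0) >> 4 | (b & 0x0f) << 4;
		b = (b & 0xcc) >> 2 | (b & 0x33) << 2;
		b = (b & 0xaa) >> 1 | (b & 0x55) << 1;
		return b;
	}

	/* Stand-in for the kernel's bitrev16() */
	static uint16_t local_bitrev16(uint16_t x)
	{
		return (local_bitrev8(x & 0xff) << 8) | local_bitrev8(x >> 8);
	}

	/* What cpu_to_be16()/be16_to_cpu() do on a little-endian CPU */
	static uint16_t local_swab16(uint16_t x)
	{
		return (uint16_t)((x << 8) | (x >> 8));
	}

	int main(void)
	{
		uint32_t x;

		for (x = 0; x <= 0xffff; x++) {
			/* tx path: cpu_to_be16(bitrev16(x)) on an LE host */
			uint16_t tx = local_swab16(local_bitrev16(x));
			/* "expected" rx path: bitrev16(be16_to_cpu(x)) on an LE host */
			uint16_t rx = local_bitrev16(local_swab16(x));

			if (tx != rx) {
				printf("mismatch at 0x%04x\n", (unsigned int)x);
				return 1;
			}
		}
		printf("identical for all 16-bit values\n");
		return 0;
	}

That is, the byte swap and the per-byte bit reversal commute, which is why
the two orderings end up doing the same thing.
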
> >
> > Thank you for your feedback. I have tested the modifications based on your
> > suggestions, taking the le system into account, and it appears that the code is
> > functioning correctly. Before sending the new patch version, I would like to
> > confirm if this aligns with your comments.

> Yes. This looks good to me.

I think the implementation is still incorrect. See below.

> > static int max14001_read(void *context, unsigned int reg_addr, unsigned int *data)
> > {
> >       struct max14001_state *st = context;
> >       int ret;
> >
> >       struct spi_transfer xfers[] = {
> >               {
> >                       .tx_buf = &st->spi_tx_buffer,
> >                       .len = sizeof(st->spi_tx_buffer),
> >                       .cs_change = 1,
> >               }, {
> >                       .rx_buf = &st->spi_rx_buffer,
> >                       .len = sizeof(st->spi_rx_buffer),
> >               },
> >       };

> >       st->spi_tx_buffer = cpu_to_be16(bitrev16(FIELD_PREP(MAX14001_ADDR_MASK, reg_addr)));

Here we take the bits in CPU order, reverse them, and convert to BE16.
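
As a concrete illustration, assume for the sake of example that
MAX14001_ADDR_MASK is GENMASK(15, 11), reg_addr is 1 and the host is
little-endian (the defines are not quoted in this mail, so the numbers are
illustrative only):

	FIELD_PREP(MAX14001_ADDR_MASK, 1)  -> 0x0800  (frame in CPU bit order)
	bitrev16(0x0800)                   -> 0x0010  (frame in LSB-first bit order)
	cpu_to_be16(0x0010)                -> bytes 0x00, 0x10 in memory

An MSB-first controller then shifts out 0000 0000 0001 0000, which is exactly
0x0800 transmitted LSB first, i.e. what the SDI input of the part expects.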

> >       ret = spi_sync_transfer(st->spi, xfers, ARRAY_SIZE(xfers));
> >       if (ret)
> >               return ret;

> >       *data = cpu_to_be16(bitrev16(st->spi_rx_buffer));

Here we take the __be16 response, reverse it, and convert it to BE16 again?!
This is weird. You should have be16_to_cpu() somewhere, not the opposite
(see the sketch below the quoted function).

> >       return 0;
> > }
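
For clarity, this is a minimal sketch of the conversion order being suggested,
reusing the names from the quoted patch (illustrative only, not a tested
replacement; presumably spi_rx_buffer would then be typed __be16 so the
annotations line up):

	/* Undo the wire byte order first, then undo the per-bit reversal. */
	*data = bitrev16(be16_to_cpu(st->spi_rx_buffer)) & MAX14001_DATA_MASK;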

Isn't this, btw, a reinvented spi_...write_then_read() (or whatever it is
called) call?
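
For reference, the generic helper form would look roughly like the sketch
below. Note that spi_write_then_read() keeps CS asserted across the write and
the read, unlike the cs_change = 1 pair in the quoted xfers[], so whether it
actually suits this part is a separate question:

	/* Sketch only: single helper call instead of the hand-rolled xfers[]. */
	ret = spi_write_then_read(st->spi,
				  &st->spi_tx_buffer, sizeof(st->spi_tx_buffer),
				  &st->spi_rx_buffer, sizeof(st->spi_rx_buffer));
	if (ret)
		return ret;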

> > static int max14001_write(void *context, unsigned int reg_addr, unsigned int data)
> > {
> >       struct max14001_state *st = context;
> >
> >       st->spi_tx_buffer = cpu_to_be16(bitrev16(
> >                               FIELD_PREP(MAX14001_ADDR_MASK, reg_addr) |
> >                               FIELD_PREP(MAX14001_SET_WRITE_BIT, 1) |
> >                               FIELD_PREP(MAX14001_DATA_MASK, data)));
> >
> >       return spi_write(st->spi, &st->spi_tx_buffer, sizeof(st->spi_tx_buffer));
> > }

-- 
With Best Regards,
Andy Shevchenko
