On 2/2/23 18:57, Jonathan Cameron wrote:
> On Tue, 31 Jan 2023 09:31:53 +0000
> "Vaittinen, Matti" <Matti.Vaittinen@xxxxxxxxxxxxxxxxx> wrote:
>
>> On 1/30/23 15:02, Jonathan Cameron wrote:
>>> On Mon, 30 Jan 2023 14:04:53 +0200
>>> Matti Vaittinen <mazziesaccount@xxxxxxxxx> wrote:
>>
>>>> However, the values
>>>> spilled from raw IIO_INTENSITY channels will change when integration
>>>> time is changed. So, should I use the info_mask_shared_by_type =
>>>> BIT(IIO_CHAN_INFO_INT_TIME) for IIO_INTENSITY channels?
>>>
>>> Ah. This problem. The mixture of two things that effectively map to scaling
>>> of raw channels. As _scale must be applied by userspace to the _raw channel
>>> that has to reflect both the result of integration time and a front end amplifier
>>> and is the control typical userspace expects to use to vary the sensitivity.
>>>
>>> That makes it messy because it's not always totally obvious whether, when
>>> trying to increase sensitivity, it is better to increase sample time or gain.
>>> Usually you do sample time first as that tends to reduce noise and for light
>>> sensors we rarely need particular quick answers.
>>>
>>> So in the interests of keeping things easy to understand for userspace code
>>> you would need to provide writeable _scale that then attempts to find the
>>> best combination of amplifier gain and sampling time.
>>
>> There is (at least) one more thing which I just noticed when I continued
>> writing the code.
>>
>> Changing the integration time impacts all intensity channels.
>
> Oh yuck.
>
>> So, scale
>> will be adjusted for all channels when a request to set scale for one
>> channel causes integration time to change. (Gain on the other hand is
>> adjustable separately for each channel.) Do you think a typical
>> user-space application can cope with this?
>
> Sensor is slow anyway, so how about updating it before a read
> if the value is different from requested for the current channel?

I think this could be doable. The bu27034 has no IRQ and no buffer, so
I plan to support only raw_read() and raw_write(). The driver could
cache the requested integration-time value for each channel and write
it to the hardware before reading data from that channel.

I am not really a fan of caching the values - but this could simplify
the integration time setting, as the new time would not necessarily
need to suit all channels. I will think about this - but the current
version I drafted does not support it.
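Roughly, the caching I have in mind would look something like the below.
This is a completely untested sketch only; the struct layout and the
helpers bu27034_write_int_time(), bu27034_get_result() and
BU27034_NUM_CHANS are made up here for illustration:

struct bu27034_data {
	struct mutex mutex;
	struct regmap *regmap;
	/* Last integration time (in us) requested for each intensity channel */
	int int_time_us[BU27034_NUM_CHANS];
};

static int bu27034_read_one(struct bu27034_data *data, int chan, int *val)
{
	int ret;

	mutex_lock(&data->mutex);
	/* Apply the time cached for this channel before sampling it */
	ret = bu27034_write_int_time(data, data->int_time_us[chan]);
	if (!ret)
		ret = bu27034_get_result(data, chan, val);
	mutex_unlock(&data->mutex);

	return ret;
}

The raw_write() path for the integration time would then only update the
cached value for the channel instead of touching the hardware right away.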
>> I am unsure if I should just use the biggest integration time (400mS) by
>> default and only decrease this when very small amplification is
>> requested by setting scale > 1. TBH, I don't like this. It prevents
>> having shorter measurement times with gains greater than 1x - and I
>> believe that users may want to have higher gains also in cases where
>> they wish quicker measurements from the sensor.
>
> Possibly shorter times might be desired, though uncommon for people
> to care a lot about read speed on a light sensor. Light levels
> usually change slowly (assuming no one is doing anything exciting
> with the sensor)
>
>>
>> Some other options I am considering:
>> 1. Skip time config for now - easiest but does not give full usability
>
> Not great - for same reason as below (saturation)
>
>> 2. Allow setting the time via devicetree at probe time - slightly better
>> but not very dynamic.
>
> Don't do that one. Usual reason to want to set this stuff is to avoid
> saturation of the sensor.
>
>> 3. Custom device-specific sysfs file for setting the time so
>> specifically tailored userland apps have access to it - adding new ABI
>> for this is probably not something we prefer ;)
>
> Indeed - new ABI is normally unused ABI
>
>> 4. Allow setting the integration time in situations where the driver can
>> internally hide the scale change by changing the gain as well - this
>> operation is not atomic in the HW and adds some extra complexity to the
>> driver. Also, this fails for configurations where the gain setting is
>> such it can't compensate the scale change.

I started with option 4) and wrote a first, 100% untested code draft. I
don't yet have the hardware at hand, so testing and fine-tuning need to
wait for a while. I wonder if I should send out an RFC to explain what I
have been up to, or if I should wait for the hardware, do the testing
and only send out the patches when I believe things work as intended :)
Sending the RFC early on would perhaps be beneficial for me as it could
help me avoid working on/finalizing a solution that will not be
appreciated ;) Yet, sending out an RFC with untested code may cause
reviewers (who skip the "small print" explaining that the code is
untested and should be reviewed for the concept only) to do unnecessary
work. And in my experience, no letters are large enough to keep all
reviewers from skipping such "small print" :)

> So we have had a few examples that might apply here (can't remember if
> any of them actually got implemented).
>
> My aim would be to have the normal ABI do the right thing, but...
> If we were to provide extra ABI that allows switching out of a default
> mode then we can assume the code using it knows what it is doing.

Hm. Maybe it's because of all the virtual waffles and beer (last weekend
was FOSDEM weekend - but I was participating only remotely) - but I am
not 100% sure I know what you mean by the default mode.

> So maybe something like
>
> auto_integration_time_from_scale that defaults to 1/true.
> When 0/false,
> integration_time becomes writeable. If you write integration time
> it would then affect scale appropriately and any change to scale
> would be clamped within whatever range is possible for the integration
> time provided.

Hm. Does this mean that writing the integration time would
1) Change the integration time in HW
2) Not change the gain in HW
3) Change the integration time and scale values in sysfs?

My current attempt is to not provide the 'mode' change flag - but to
provide R/W integration time and R/W scale.

Regarding the scale: When the scale is changed by the user, the driver
attempts to maintain the integration time and perform only a gain
change. If the requested scale can not be supported by any available
gain using the current integration time, then the driver attempts to
change both the gain and the integration time to achieve the requested
scale. (If this also fails, then an error is returned.) I guess this is
what is expected to happen in the "normal mode".

As I mentioned earlier, this does not allow great control over the
integration time for users who (may?) wish to have a shorter time with
a gain bigger than 1x. Hence the writeable integration time. Now, a user
may request a different integration time by writing the integration
time. I assumed this is also normal operation as long as it does not
cause a scale change? Hence, my current attempt is to compensate for the
caused scale change by also adjusting the gain.

It appears to me (based on the equation from my HW colleagues) that
doubling the integration time also doubles the reported intensity
value. I think this is also expected. For example: let's assume a case
where we originally have gain 8x and int-time 400 ms, and the user
wants to set a new int-time of 200 ms. Halving the measurement time
would mean halving the reported intensity values, causing the scale to
change, right? Now, in order to maintain the scale, the driver would
not only be dropping the integration time but also internally doubling
the gain from 8x to 16x.
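In code, the compensation could be roughly as below. Again, this is only
an untested sketch; the helpers bu27034_get_gain(),
bu27034_gain_is_supported(), bu27034_write_gain() and
bu27034_write_int_time() are invented for this example, and in practice
the gain of every intensity channel would need to be re-adjusted because
the integration time is shared by all of them:

static int bu27034_set_int_time_keep_scale(struct bu27034_data *data,
					   int chan, int old_time_us,
					   int new_time_us)
{
	int old_gain, new_gain, ret;

	ret = bu27034_get_gain(data, chan, &old_gain);
	if (ret)
		return ret;

	/*
	 * Doubling the integration time doubles the reported counts, so
	 * keeping gain * time constant keeps the scale seen by user space
	 * unchanged. E.g. 400 ms & 8x => 200 ms & 16x.
	 */
	new_gain = old_gain * old_time_us / new_time_us;

	/* Not every time change can be compensated by an available gain */
	if (new_gain * new_time_us != old_gain * old_time_us ||
	    !bu27034_gain_is_supported(data, chan, new_gain))
		return -EINVAL;

	ret = bu27034_write_gain(data, chan, new_gain);
	if (ret)
		return ret;

	return bu27034_write_int_time(data, new_time_us);
}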
>
> If you want to write an auto tuning routine you could do that and mostly
> avoid the problem.
> We have a few drivers doing that IIRC.

Hm. Yes, it must be the virtual waffles :) I am unsure what you mean by
autotuning. Do you mean increasing the gain/integration time in the
driver based on the raw data, to ensure the values are as accurate as
possible without being saturated?

> The only signal userspace
> normally cares about is illuminance and hopefully the calculations
> take the different settings into account. I never much liked
> the cases we have of this but there are a few e.g. gp2ap020a00f
> In my view that should be a userspace problem as the algorithms to
> tune this could be quite complex.
>

This definitely sounds like a computation better done in user-space to
me. I had a hard enough time writing a simple lux calculation in my
driver, resulting in a bunch of overflow checks and do_div()s...
Luckily, performance was not a priority.

>>
>> I would be grateful for any suggestions :)
>>
>> Finally, the BU27034 allows pretty big gains - from 1x to 4096x. After a
>> quick look in existing light sensor drivers - I think the available
>> scales for IIO_INTENSITY channels are usually from 1.0 downwards. ("1.0
>> 0.xxx 0.yyy 0.zzz"). 4096x (or 32768x if we take the max measurement
>> time into account) will cause some loss of accuracy even with NANO scale
>> if we use scale range starting from 1.0.
>
> *sigh* someone implemented an insane gain because they happened to be able
> to. I'd imagine that adds so much noise to be effectively pointless.
> That dynamic range seems unlikely to be of much practical use.

I can't say anything about this as I don't have the sensor yet :)

>>
>> As a side note - I had always thought measuring the light is just simple
>> value reading from a sensor. I never regarded the fact that human eye
>> sees only certain wavelengths or that the sensors sensitivity changes
>> depending on the wavelength. It's funny how I always end up knowing less
>> when I know more ;)
>
> Light sensors are a pain in many ways! We've had a few datasheets over the
> years that have insisted that the color channels have well specified units
> despite there being no such definition that I'm aware of (and no means
> to define one for cheap sensors. What is standard Red? :)) All the
> illuminance values are just approximations to the official curve.
>
> Sometime in the past we debated trying to describe the curves, but it's
> hard to do as they aren't nice shapes (so 3DB points don't make much
> sense).

This is easy to agree with after only a very brief look at the topic :)

Yours,
	-- Matti

--
Matti Vaittinen
Linux kernel developer at ROHM Semiconductors
Oulu Finland

~~ When things go utterly wrong vim users can always type :help! ~~