On Mon, 27 Feb 2023 11:54:59 +0200
Matti Vaittinen <mazziesaccount@xxxxxxxxx> wrote:

> On 2/27/23 09:22, Matti Vaittinen wrote:
> > On 2/26/23 19:30, Jonathan Cameron wrote:
> >> On Sat, 18 Feb 2023 20:08:10 +0200
> >> Matti Vaittinen <mazziesaccount@xxxxxxxxx> wrote:
> >>
> >>> Thanks a lot Jonathan,
> >>>
> >>> You have been super helpful :) Thanks!
> >>>
> >>> On 2/18/23 19:20, Jonathan Cameron wrote:
> >> Hmm. There is another approach that I'd not thought of in this case
> >> because in my head integration time is more continuous than it is for
> >> this part, and that is to fiddle the _raw values (we do this for
> >> oversampling or SAR ADCs where things tend to be powers of 2). The
> >> trick is to always shift the raw value so that the 'scale' due to (in
> >> this case) integration time remains constant. That separates the two
> >> controls completely.
> >
> > Holy cow! That's a neat trick which I didn't think of!
> >
> > Basically, we could do >> 1 for the data when time is 100 ms, >> 2 when
> > 200 ms and >> 3 when 400 ms. We would want to use 19-bit channel values
> > then.
>
> Please ignore my previous mail. It seems I am once again not knowing
> what I am talking about. If we take this approach, we shift << 3 when
> the integration time is 55 ms, << 2 for 100 ms and << 1 for 200 ms. With
> 400 ms we would not shift.

Spot on.

>
> >> However, I'm not sure that makes sense here where the thing we typically
> >> want to change when scaling due to saturation is integration time.
> >
> > That's a bit problematic, yes. We could "fool" the user by doing the
> > saturation check in the driver, and then just returning the max value
> > with all 19 bits set if saturation is detected. This, however, would
> > yield raw values that are slightly off. OTOH, with a max shift of 3 bits
> > that's only 7 'raw ticks' - which I hope is acceptable. I hope the user
> > will then switch to a shorter integration time and start getting correct
> > readings.
> >
> > It's slightly sad to say "good bye" to the gain-time-scale helpers but I
> > guess you just helped me to solve this in a _really_ simple way. We
> > can keep those helpers in the "back pocket" for the day when we need
> > them ;)
> >
> > I will see what comes out of this idea - thanks for the help again!
> >
>
> But as you surely knew from the start, the saturation problems kick in
> with the 'non maximum shifts' when the _highest_ bits never get set.

Yes, that's what we'd expect to see, as we can only measure high light
levels if the integration time is short.

> There the 'saturation detection' would cause a huge jump by suddenly
> setting the high bits. So, yes - this does not seem like a feasible
> option here :/

Yes, there is no consistent value for saturation if you are changing the
integration time, as the real light levels that cause saturation depend
on the integration time.

>
> /me feels stupid...
>
> Sorry for the noise!

No problem. It's interesting to understand where the limitations of some
of these techniques lie. I hadn't thought about the issue of saturation,
as the previous times we've done this have typically been on ADCs doing
oversampling or similar, where we don't get the same problem.

Jonathan

>
> --Matti
>
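For illustration, below is a minimal sketch of the shift idea discussed in
the thread: the raw register value is left-shifted by an amount chosen per
integration time so that the effective scale stays constant, at the cost of
reporting 19-bit channel values. It assumes a 16-bit data register and the
55/100/200/400 ms integration times mentioned above; the type, table and
function names are made up for this example and are not taken from any
actual driver.

/*
 * Illustrative sketch only: scale the raw reading so that the scale
 * implied by the integration time stays constant. The shift table and
 * the 16-bit register width are assumptions for this example.
 */
#include <stdint.h>

struct int_time_shift {
	unsigned int time_ms;	/* integration time in milliseconds */
	unsigned int shift;	/* left shift applied to the raw value */
};

static const struct int_time_shift shifts[] = {
	{ 55, 3 },	/* shortest integration time => largest shift */
	{ 100, 2 },
	{ 200, 1 },
	{ 400, 0 },	/* longest integration time => no shift */
};

/* Widen the 16-bit register value into a 19-bit "raw" channel value. */
static uint32_t scale_raw(uint16_t reg_val, unsigned int time_ms)
{
	unsigned int i;

	for (i = 0; i < sizeof(shifts) / sizeof(shifts[0]); i++)
		if (shifts[i].time_ms == time_ms)
			return (uint32_t)reg_val << shifts[i].shift;

	return reg_val;	/* unknown integration time: return unshifted */
}

The saturation caveat from the thread shows up directly in this sketch:
with any shift smaller than the maximum, a saturated register value of
0xFFFF maps to less than the 19-bit full-scale value, so having the driver
report all 19 bits set when it detects saturation would produce the sudden
jump described above.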