It’s definitely possible for the cell voltage to change between readings, especially between the first and second, and particularly if the memorized brightness is high.
But there’s another way it can change… the code wasn’t doing very good noise reduction. It was getting a wide spread of values for no reason other than electrical noise and sensor jitter.
Fortunately, I fixed that. I actually spent a lot of the past couple of months fixing it, testing a bunch of different solutions, and finally settled on one that is simple but effective. And the code is now in the repository, as of about an hour ago.
Any variation in measurements now is almost certainly a real change in the signal, not an illusion caused by noise.
Her interrupt handler is 1 line of code in fsm-wdt.
Yeah. I was getting a variety of bugs caused by race conditions, so I made the order of execution more explicit. Also, I’m still kinda hoping to implement a PWM-DSM hybrid algorithm to adjust brightness between PWM steps… and that requires very tight interrupt timing, so I needed to move as much code as possible out of interrupt handlers.
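To illustrate the general pattern (just a rough sketch with made-up names, not the actual FSM code): the interrupt handlers do nothing but record that an event happened, and the real work runs later from the main loop, where the order of execution is explicit.

```c
#include <avr/interrupt.h>
#include <stdint.h>

/* hypothetical event flags, not actual FSM names */
volatile uint8_t irq_wdt = 0;
volatile uint8_t irq_adc = 0;

/* keep the interrupt handlers as short as possible:
 * just note that the event happened, then return */
ISR(WDT_vect) { irq_wdt = 1; }
ISR(ADC_vect) { irq_adc = 1; }

/* called from the main loop, outside interrupt context,
 * so the order of execution is explicit and race-free */
void handle_deferred_events(void) {
    if (irq_wdt) { irq_wdt = 0; /* tick: timeouts, button timing, ... */ }
    if (irq_adc) { irq_adc = 0; /* new ADC sample: voltage / temperature checks */ }
}
```

Keeping the handlers that short is what would leave room for something timing-critical like a PWM-DSM hybrid to run without jitter.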
Now the race condition bugs are fixed (as of mid-November), and it’s theoretically ready for the PWM-DSM thing, if I ever get around to doing it.
I recall from DEL, there was a reason to skip the first reading…
The low-pass filter we use is still subject to one-off glitchy readings; it’s not doing averaging of any kind.
Yeah, the first measurement is junk, and the ATtiny manual says to ignore that first value.
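In blocking form, discarding that first value looks something like this sketch (simplified for clarity; the actual firmware reads the ADC from an interrupt instead):

```c
#include <avr/io.h>
#include <stdint.h>

/* after enabling the ADC or switching the reference/channel,
 * throw away the first conversion and only trust the second one */
static uint16_t adc_read_stable(void) {
    for (uint8_t i = 0; i < 2; i++) {
        ADCSRA |= (1 << ADSC);            // start a conversion
        while (ADCSRA & (1 << ADSC)) {}   // wait for it to finish
    }
    return ADC;                           // keep only the second result
}
```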
About averaging, I tried a bunch of different methods and algorithms for that, in hopes of getting a more stable signal and increasing the effective resolution. What I found was that it worked great while the light was at rest, but during actual use the data was too noisy. Even with 2048x oversampling, the signal was still noisy sometimes on hardware with high-amp cells and a FET.
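For reference, that kind of oversampling is basically “sum a big pile of raw samples and shift the sum down”, roughly like this generic sketch (adc_sample() is a stand-in, not a real function from the firmware):

```c
#include <stdint.h>

extern uint16_t adc_sample(void);   /* stand-in for the raw 10-bit ADC read */

/* generic 2048x oversampling: the sum of 2048 ten-bit samples fits in
 * 21 bits; shifting right by 5 scales it back down to a 16-bit result,
 * trading sample rate for a few extra bits of resolution */
uint16_t adc_oversampled(void) {
    uint32_t sum = 0;
    for (uint16_t i = 0; i < 2048; i++) sum += adc_sample();
    return (uint16_t)(sum >> 5);
}
```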
So I eventually gave up on getting extra resolution, and instead focused on eliminating noise as much as possible.
It now samples continuously, with everything left-adjusted so it’s 16 bits… 10 bits of signal and 6 bits of noise reduction. It’s basically a 10.6 fixed-point number. Each time it gets a new sample, it adjusts the running average up or down by 1 in the lowest bit (1/64 of a unit), meaning it takes 64 samples in a row to move the needle by 1 full 10-bit unit.
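In code, the idea is roughly this (a simplified sketch with made-up names, not the exact code from the repository):

```c
#include <stdint.h>

/* running average as a 10.6 fixed-point value:
 * high 10 bits are the ADC reading, low 6 bits are the fraction */
volatile uint16_t voltage_fp = 0;

/* called once per sample */
void lowpass_step(uint16_t raw10) {
    uint16_t sample_fp = raw10 << 6;    // left-adjust to 10.6 format
    if (sample_fp > voltage_fp)      voltage_fp++;   // nudge up by 1/64
    else if (sample_fp < voltage_fp) voltage_fp--;   // nudge down by 1/64
    /* it takes 64 consecutive samples in one direction
     * to move the value by one full 10-bit unit */
}
```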
This way, if it’s measuring a value of 100, the raw values may fluctuate randomly between 90 and 110, but the lowpassed value only fluctuates between ~99.95 and ~100.05. So it’s much more stable.
Then it adds 0.25 to that value and does a floor() operation, which makes it a very, very steady value of exactly 100. Any change in this number represents a true change in the signal, rather than just random noise.
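The readout step is then just fixed-point rounding: add 0.25 (which is 16 in 1/64 units) and drop the fractional bits. Something like this sketch, with the same made-up names as above:

```c
#include <stdint.h>

extern volatile uint16_t voltage_fp;   /* the 10.6 running average from above */

/* convert back to a plain 10-bit value: add 0.25 (16/64 in 10.6 units),
 * then floor by dropping the 6 fractional bits */
uint16_t voltage_now(void) {
    return (uint16_t)((voltage_fp + 16) >> 6);
}
```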
The value still updates quickly, since it samples a few thousand times per second. But the data it spits out is very, very stable now.