E-switch UI Development / FSM

I guess that battery sag could be fixed by blinking the number of volts, reading the voltage again, and then blinking 0-1 digits for the full volts and then the decimals. Careful not to blink 4.9.
Or better: blindly do 2 blinks and then the rest.

Might be easily fixable by a delay. I recall from DEL that there was a reason to skip the first reading... This was back in the days of the Q8 development.

The low pass filter we use is still subject to one-off glitchy readings - it's not doing averaging of any kind.

This seems beyond my skills so far though…

It’s in the datasheet. If you swap between temperature readings and reading VCC, then this applies: “The first ADC conversion result after switching voltage reference source may be inaccurate, and the user is advised to discard this result.”
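For anyone who wants it in code, here's a minimal sketch of the workaround on a classic AVR (ATtiny85-style registers; the helper name is mine, not from the actual source):

```c
#include <avr/io.h>
#include <stdint.h>

// After switching the reference/channel, run one dummy conversion and
// throw it away, per the datasheet note quoted above.
uint16_t adc_read_after_mux_change(uint8_t admux) {
    ADMUX = admux;                    // switch voltage reference / channel
    ADCSRA |= (1 << ADSC);            // start dummy conversion
    while (ADCSRA & (1 << ADSC)) {}   // wait for it to finish
    (void)ADC;                        // discard the inaccurate first result
    ADCSRA |= (1 << ADSC);            // start the real conversion
    while (ADCSRA & (1 << ADSC)) {}
    return ADC;                       // this one is trustworthy
}
```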

Another thing to consider when reading MCU VCC as battery voltage is that the voltage drop over the diode is only the same if the current draw from the MCU is always the same. Changes in current draw from the MCU lead to changes in the voltage drop. It’s different depending on the diode, of course, but the drop over the diode is not fixed; it is current dependent. I didn’t bother about it with the good old fashioned 7135 & FET drivers, but it became much more noticeable when I started powering some components (like a digi-pot) directly from the MCU pins. I went back to using a voltage divider.
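For reference, the divider math itself is simple. A small sketch with illustrative resistor values (not anyone's actual board):

```c
#include <stdint.h>

#define R1 220  // kOhm, battery side (illustrative values only)
#define R2 47   // kOhm, GND side

// V_adc = V_batt * R2 / (R1 + R2), so recover the battery voltage (millivolts):
static inline uint16_t batt_mv(uint16_t adc_mv) {
    return (uint16_t)((uint32_t)adc_mv * (R1 + R2) / R2);
}
```

Unlike a diode drop, the ratio (R1 + R2) / R2 doesn't move around with the MCU's current draw.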

Ahh - ok. I'll have to check Anduril again for support of this. Is it true, then, that using an external voltage divider would solve all the flaky problems we see in voltage readings/reports?

It's a shame, because with a FET+1 driver pin #7 is spare, but we stayed away from the divider because of the extra current draw in e-switch lights. I've used higher values for the voltage divider resistors to reduce the parasitic drain, and it seemed to work well.

For me the divider was a no-brainer, because the current draw from turning digi-pots on/off and switching 4 output channels made the readings pretty bad. I had moved on from the 85 by the time I started doing these kinds of things, so pins weren't an issue.

I’m using the 3217 now so I definitely don’t have a shortage of pins. The negative side of my voltage divider does not go to GND; it goes to another pin so I can control its connection to GND. When shutting down on e-switch lights I just set that pin to high-impedance input and turn on the pull-up resistor, which entirely eliminates parasitic drain from the divider.
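A minimal sketch of that trick on a tiny1-series, assuming the divider's low side is on PB5 (the pin choice and function names are mine):

```c
#include <avr/io.h>

// During a measurement: sink the divider's low side to GND so it conducts.
static inline void divider_enable(void) {
    PORTB.PIN5CTRL &= ~PORT_PULLUPEN_bm;  // pull-up off
    PORTB.OUTCLR = PIN5_bm;               // output low
    PORTB.DIRSET = PIN5_bm;               // pin as output
}

// At shutdown: high-impedance input with the pull-up on, so no current
// flows through the divider while the light sleeps.
static inline void divider_disable(void) {
    PORTB.DIRCLR = PIN5_bm;               // pin as input (high impedance)
    PORTB.PIN5CTRL |= PORT_PULLUPEN_bm;   // enable internal pull-up
}
```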

That's a nice feature, to eliminate the parasitic drain.

I wonder what it would take to merge that into the main source tree…
What kind of maintenance would be needed? I guess only adding cfg files?

No idea - I'm totally confused by that management UI. I use git at work every day - not on the web, but with a master repository.

I've got base mods to Anduril itself (anduril.c), fsm-adc, the setup files, and of course I added the project and solution files for Atmel Studio.

Also added voltage reading calibration to the UI - the standard 4-click method.

Yes, I assume so. I didn’t even know it existed.

That would be helpful.

I don’t think it’s a lack of willingness though… more a lack of understanding how free software works and what is required of people who sell it. The license is easy to satisfy, but seems hard for people to understand when they have never encountered anything like it before.

Usually when I bring it up, manufacturers are just like “Oh, that one uses Anduril, and the other one uses NarsilM. You’re welcome.” … and then I’m like “Knowing the name of the firmware doesn’t help; what is required is a file containing the exact source code used on the product, and a way for everyone to get that file.” :frowning:

It’s also hard to get info about Astrolux products in general, since it’s basically a reseller brand name which gets put onto products from other companies. Some are from Mateminco, but there are other OEMs too.

Nope. It seems like, more often than not, companies make no attempt to read or understand the license. They put “Anduril” in the marketing materials to help promote the product, and sometimes they’ll add a “Thanks, ToyKeeper!” somewhere, but I don’t remember the last time a company actually satisfied the license without me bothering them repeatedly with detailed instructions about what was needed.

Thanks aren’t needed. Source code is needed.

Yes, and it’s used on the D4v2 titanium and brass models.

But the button light and the aux LEDs run in sync with each other. They don’t have separate independent modes. The button LEDs mirror whatever the main LEDs or aux LEDs are doing, depending on which of the two is active at the time.

And then there’s the Noctigon K1 RGB button LEDs, which sort of do a hybrid of both.

The aux LED / button LED code really is a mess though, and could stand to be completely rewritten to make it cleaner and more straightforward. It kind of just built up over time as projects needed it, without being explicitly designed.

That’s pretty cool.

It supports two different voltage measurement methods though, and I’m not really sure how to user-calibrate the other one.

The VCC pin method is simple and merely adds a constant fudge factor to adjust the response curve. However, the voltage divider method draws a line between two arbitrary points, and those points can be anywhere from 0 to 1023. So that would be harder to do user calibration for.

Also, the UI doesn’t generally know or care which method is used. It could access the #defines to find out, but it’d need to implement two different voltage calibration modes and select one at compile time.
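To make the difference concrete, here's a rough sketch of the two response curves. All names and units here are hypothetical, not Anduril's actual code:

```c
#include <stdint.h>

// VCC method: one fixed curve plus a single user-adjustable offset.
static int8_t fudge_cv = 0;  // the lone calibration knob, in centivolts
uint16_t voltage_from_vcc(uint16_t adc) {
    // VCC = 1.1V * 1023 / ADC, in centivolts (110 * 1023 = 112530)
    return (uint16_t)(112530UL / adc) + fudge_cv;
}

// Divider method: a line through two calibration points, each anywhere in
// 0..1023 -- two degrees of freedom, so much harder to user-calibrate.
static uint16_t adc_lo = 255, adc_hi = 1000;  // example points only
static uint16_t cv_lo = 220, cv_hi = 880;     // centivolts at those points
uint16_t voltage_from_divider(uint16_t adc) {
    return cv_lo + (int32_t)(adc - adc_lo) * (cv_hi - cv_lo) / (adc_hi - adc_lo);
}
```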

This is a good start, but there’s still more to do in order to make voltage calibration work on all supported devices.

It’s definitely possible for the cell voltage to change between readings, especially between the first and second, especially if the memorized brightness is high.

But there’s also another way it can change… the code wasn’t doing very good noise reduction. It was getting a pretty wide variety of different values for no reason other than electrical noise and sensor jitter.

Fortunately, I fixed that. Actually spent a lot of the past couple months fixing that, testing a bunch of different solutions, and finally settling on one which is simple but effective. And the code is now in the repository, as of like an hour ago.

Any variation in measurements now is almost certainly a real change in the signal, not an illusion caused by noise.

Yeah. I was getting a variety of bugs caused by race conditions, so I made the order of execution more explicit. Also, I’m still kinda hoping to implement a PWM-DSM hybrid algorithm to adjust brightness between PWM steps… and that requires very tight interrupt timing, so I needed to move as much code as possible out of interrupt handlers.

Now the race condition bugs are fixed (as of mid-November), and it’s theoretically ready for the PWM-DSM thing, if I ever get around to doing it.
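For the curious, the PWM-DSM idea is roughly to dither between two adjacent PWM steps so the average output lands in between. A minimal first-order sketch, with my own names and fixed-point layout (not FSM code):

```c
#include <stdint.h>

// 'level' is 8.2 fixed-point: top 8 bits = hardware PWM duty, low 2 bits =
// the fraction to synthesize by dithering between adjacent duty values.
volatile uint16_t level;
static uint8_t dsm_acc;  // first-order delta-sigma error accumulator

// Call once per PWM period, from a tightly-timed interrupt.
void dsm_tick(void) {
    uint8_t duty = level >> 2;            // integer PWM step
    uint8_t frac = (level & 0x03) << 6;   // fraction, rescaled to 8 bits
    dsm_acc += frac;
    if (dsm_acc < frac) duty++;           // accumulator wrapped: carry a step
    // real code would write duty to the PWM register here, e.g. OCR0B = duty;
    (void)duty;
}
```

With a fraction of 1/4, the accumulator wraps once every 4 periods, so the duty gets bumped by one step exactly 25% of the time. That's why the interrupt timing has to be so tight.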

Yeah, the first measurement is junk, and the attiny manual says to ignore that first value.

About averaging, I tried a bunch of different methods and algorithms for that, in hopes of getting a more stable signal and increasing the effective resolution. What I found was that it worked great while the light was at rest, but during actual use the data was too noisy. Even with 2048X oversampling, the signal was still noisy sometimes on hardware with high-amp cells and a FET.

So I eventually gave up on getting extra resolution, and instead focused on eliminating noise as much as possible.

It now samples continuously, with everything left-adjusted so it’s 16 bits… 10 bits of signal and 6 bits of noise reduction. It’s basically 10.6 fixed-point numbers. Each time it gets a new sample, it adjusts the running average up or down by 1, meaning it takes 64 samples in a row to move the needle by 1 full 10-bit unit.

This way, if it’s measuring a value of 100, the raw values may fluctuate randomly between 90 and 110, but the lowpassed value only fluctuates between ~99.95 and ~100.05. So it’s much more stable.

Then it adds 0.25 to that value and does a floor() operation, which makes it a very very steady value of exactly 100. Any change in this number represents a true change in the signal, rather than just random noise.

The value still updates quickly, since it samples a few thousand times per second. But the data it spits out is very, very stable now.
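Putting that description into a minimal sketch (identifiers are mine, not the actual FSM code):

```c
#include <stdint.h>

static uint16_t smooth;  // 10.6 fixed-point running value (16 bits total)

// Called for every new sample; raw16 is the 10-bit result left-adjusted to
// 16 bits. One count here is 1/64 of a 10-bit unit, so it takes 64 samples
// in a row to move the needle by one full unit.
void lowpass_update(uint16_t raw16) {
    if      (raw16 > smooth) smooth++;
    else if (raw16 < smooth) smooth--;
}

// Read out: add 0.25 (i.e. 16 in 10.6 fixed-point) and floor back to 10 bits.
uint16_t lowpass_value(void) {
    return (smooth + 16) >> 6;
}
```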

Sorry to post so many times in a row… I’ve been trying to fix the code for ADC, voltage, and thermal regulation for months now, and I finally got it working well.

0-8-15 User motivated a lot of this, since he did a lot of work to fix thermal regulation after I half-broke it in the mid-November update. (it worked, but was too slow and allowed high-power lights to get pretty hot) So he did a bunch of experiments and coding to make it respond better. He used 2560X oversampling with a new thermal algorithm, and got pretty good results on D4 and D4S.

When I tried it here though, I found it was still sometimes unstable and required extra tweaking to make it work on other hosts, and there were several details in the code I wanted to fix before merging. So that got me started down a rabbit hole which was deeper than I expected.

Pretty soon, I had gotten a bit lost in ADC signal processing algorithms and PID theory and a bunch of other stuff, and none of it wanted to work right.

Eventually though, I set all that aside and kinda started over, thinking through it from scratch one day in the shower. By the end of the shower, I had an idea worked out and a pretty detailed plan for the code. So I tried it, and … it worked. A couple small things needed tweaking, but overall it basically just worked.

The shower is the best place to write code. :stuck_out_tongue:

I’ve been testing it on a variety of different hosts for the past few days, and so far everything has been fairly close to an ideal response… even without tweaking anything per host.

So I merged it (and the K1 branch, while I was at it, since I wanted to test on K1 too) and finally uploaded the code today.

Testing would be helpful, to get some independent verification that it works.

Here are my results so far. Everything used mild fan cooling unless otherwise noted:
(small house fan about 1m away on its lowest setting, to make a gentle breeze)

D4v2:

D4Sv2:

BLF Q8:

Noctigon K1 W1:

FW3A 219B (test by Bob_McBob with no cooling):

FF E01:

FF PL47 G2:

MF01S: (graph is weird; the ceilingbounce app had some sort of bug during the test)
(also, this particular MF01S is a prototype which is known to have unstable output)

MF01-Mini: I tested both turbo and the ramp ceiling, because this thread indicated regulation problems at the ramp ceiling level. Here are the results for both. I let the second test run until the battery was empty, to see how the rest of the graph would look:


Prototype of an upcoming light:

The code is in the FSM branch on Launchpad, and I uploaded some new .hex files for testing too.

I’m not considering it stable until it has had more testing, but if no big issues are found I’ll probably mark it as stable and merge it into trunk.

Edit: Added MF01-Mini test results. Also, if anyone was wondering about those small but sharp-looking stairstep adjustments later on in the graphs, they’re not actually sharp. Here’s a close-up of one. It took 32 seconds to adjust output by about 3%, moving up one small step every 4 seconds:

Each of those 8 steps is the smallest adjustment possible using the given hardware’s PWM controls. The changes are not visible by eye during use.

Thanks TK for all the info. I'm definitely interested in the new algorithm you've got there. Regarding the voltage divider option, yes - it's required for more-than-1S battery setups, 6V/12V LED output, etc. I recall DEL saying the voltage divider method is more reliable/accurate from one unit to the next. We had a back-and-forth conversation about this; not sure if it was posts on BLF or PMs/emails. I recall thinking - why did I ever drop the R1/R2 support? For example, standard Nanjg drivers had R1/R2. But of course it was to further reduce parasitic drain. Now I use higher resistor values for R1/R2 in 2S configurations.

In NarsilM, I implemented voltage divider support as well, but also used DEL's method for it. Just checked and I'm not doing any filtering on the voltage divider reading, but I'm using DEL's "crude low pass filter" on the internal reference source.

Have you seen or heard of a need to calibrate the R1/R2 voltage divider lights? Guess I could check the BLF GT threads.

TK, those results look great to me! Thanks so much for all the work you put into this.

Mike

I agree about the shower, btw... The takeaway from those graphs is: wow, we've got way too many amps for way too little heatsink. The K1 W1 is the only one doing decently.

The graph on the Q8 looks outstanding though - about perfect; it settles down after 7 minutes to a nice sustainable level. Actually, most of these look great and match their lights so well - for example, the D4 vs. the D4S.

What's the stable temperature set to on these, or can it even be defined that simply?

TK - I'm not sure why there aren't more posts about this! The more I look at these graphs, the more impressed I am!!

You probably didn't/couldn't keep up, but there has been complaining going on about the thermal management in a few of the Anduril lights - mostly the ones that apparently are not configured properly, with no light-specific Anduril configuration. The Astrolux MF01 Mini for example, but "man of light" came up with a nice partial fix for it - a copper heatsink. You should check it out here: https://budgetlightforum.com/t/-/59714. Whether this is doing the right thing or not, I'm not sure (maybe just delaying the inevitable?), but it's helping.

Yes, it still seems to vary… especially on drivers mass-produced for cheap.

I mostly just decided I was okay with a precision of +/- ~0.15V, since it’s not easy to get a mass-produced attiny device to measure more accurately than that… but it would be nice to tighten up those tolerances or make it user-correctable.

TBH, I was surprised it worked. I didn’t even tweak the thermal parameters for each build target. I built in some configurable new knobs designed to be set differently for each model of light, but I didn’t end up needing to use them.

They were all set to the default temperature limit, which in most cases is 45 C. The user can raise that to get more light, but I wanted to do the testing with a relatively low limit because it makes regulation more difficult. The ambient temperature was about 18 to 20 C, and I used factory reset to calibrate the sensor as if it was 21 C.

The K1 has a default limit of 55 C, because most of its heat comes from the driver instead of the LED… and the driver can tolerate higher temperatures than the user’s hand. It has so much thermal mass that the outside of the light usually doesn’t get very warm during use, and most of the heat is concentrated inside at the driver.

I haven’t tested a MF01-Mini with this change yet. Will have to try that. I’m guessing it’ll probably behave similar to a D4S, which was one of the lights with the most thermal regulation problems in older versions.

You are using old MCUs. The ATtiny85 datasheet specifies that the internal 1.1V voltage reference can be anywhere from 1.0V to 1.2V; same with the 1634. The limits on the same internal 1.1V Vref in the newer 1-series 3217 are 1.078V to 1.122V. Already there is a huge gain in accuracy. An added bonus is that the internal temperature sensor is factory calibrated. I've not needed to calibrate a single one of my 3217-based drivers for either voltage or temperature.
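A quick worked example of what that spread means for a battery reading, assuming the usual trick of measuring the internal bandgap against VCC:

```c
// Firmware computes VCC = 1.1V * 1024 / ADC, assuming the bandgap is exactly
// 1.1V. The reported voltage actually scales as VCC * (1.1 / Vref_real):
//   true 4.00V cell, ATtiny85 worst cases (Vref 1.0V..1.2V):
//     Vref = 1.0V -> reads 4.00 * 1.1/1.0 = 4.40V
//     Vref = 1.2V -> reads 4.00 * 1.1/1.2 = 3.67V     (~ +/-10%)
//   tiny1-series (Vref 1.078V..1.122V): reads 3.92V..4.08V (~ +/-2%)
```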

The 85 is getting close to 15 years old. Maybe it’s time to move on?