STAR Firmware by JonnyC - Source Code and Explanation

This is for when someone programs a lot of different lights, and they can just set it and forget it and only worry about enabling or disabling turbo.

I’d think that someone who programs a lot of different lights would have a lot of different source files, I know I will :slight_smile: But I guess it’s down to personal preference…

Compile-time options are generally a good thing. They almost literally make the world go around.

Normally, it’s used so that a configuration script can auto-configure a program to build and run correctly based on what it detects about the host system, and compile-time options like that were probably part of the build process for 90% of the programs on your computer. The only real difference here is that it’s a human defining the parameters instead of a script.
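As a concrete illustration in C, this is the style STAR itself uses — comment a #define in or out and the preprocessor builds a different program (the option names below are made up for the example, not STAR's actual ones):

```c
/* Hypothetical compile-time options: comment a line out and that
 * feature's code is never even compiled into the hex file. */
#define ENABLE_TURBO              /* assumed option name, for illustration */
/* #define VOLTAGE_MON */         /* disabled: this code won't be built */

#ifdef ENABLE_TURBO
static const int mode_count = 4;  /* low, med, high, turbo */
#else
static const int mode_count = 3;  /* low, med, high */
#endif
```

Since the disabled branches never reach the compiler, they cost zero bytes on the chip — which matters a lot on an attiny13a with 1 KB of flash.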

Got mine in wight…just gotta solder some leads to it and give it a whirl :smiley:

Sweet, keep us posted. Looks like I’ll know whether my two bucks was a wasted investment long before I receive the product. ;~~)

I am changing the functionality of some of the features with the STAR 1.1 firmware and just want to make sure about what this line of code does:
PWM_LVL = modes[--mode_idx];
This changes the PWM_LVL (output) to the level of the previous mode but does not actually change the mode itself. This means that if this line is executed while the light is on high, the output will switch to that of medium, but the light will still “think” it’s on high and remember high as current mode if switched off. Is that correct?

What are you wanting to do?

I want to understand what that line does exactly :slight_smile:

I am experimenting with different behavior for the low voltage step-down and critical-voltage shut-off, such as not blinking “off” but blinking at the same PWM_LVL as the mode it will switch to after the warning flash. I am also contemplating whether I want the driver to remember this as the new mode, or to keep the original mode in memory while running PWM_LVL at the previous lower mode. To me it looks like that line just changes the output and not the mode, but I am no programmer, so I just wanted to ask, as I don’t have a good testing facility set up to test this myself… yet…

As my question is more about programming syntax than the actual firmware, disregard it. I have software developers at work, I will ask one of them.

Your understanding is correct.

Yup, that's just for Turbo. It's so that when you turn it off and back on it will switch back to Turbo, and not the previous level that it stepped down to.

EDIT: Oops, and yes, it does the same thing for the low voltage ramp down. You're correct in that it doesn't save the mode, that only happens when store_mode_idx is called.
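The distinction can be sketched in plain C (the mode values here are illustrative, not the real STAR table, and nothing persistent is written — in the real firmware only store_mode_idx saves the mode):

```c
#include <stdint.h>

/* Illustrative values only -- not the actual STAR mode table */
static uint8_t modes[] = { 10, 120, 255 };   /* low, med, high */
static uint8_t mode_idx = 2;                 /* currently running on high */
static uint8_t PWM_LVL  = 255;

/* The step-down line: the pre-decrement lowers mode_idx first, then
 * the new index selects the previous (dimmer) mode's output level.
 * The saved (EEPROM) mode is untouched because nothing stores it. */
static void step_down(void) {
    PWM_LVL = modes[--mode_idx];
}
```

Note that mode_idx in RAM does drop along with the output; it's the *stored* mode that stays put, so the next power-on returns to whatever was last saved.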

Note you can also use this line to make the turbo step-down go to a specific PWM value instead of the previous mode: simply replace PWM_LVL = modes[--mode_idx]; with PWM_LVL = 180; (or whatever PWM value you want). That way you can have it step down to a level of your choice but still act normally, and still be able to be bumped back up to turbo with a single click.

I use it like this in my small lights that need a large step down to not overheat.
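A sketch of that substitution (the constant name and the value 180 are just examples):

```c
#include <stdint.h>

#define TURBO_STEPDOWN_LVL 180   /* hypothetical name; pick any PWM value */

static uint8_t PWM_LVL = 255;    /* running flat out on turbo */

/* Instead of PWM_LVL = modes[--mode_idx];, drop straight to a fixed
 * level. mode_idx is untouched, so one click still returns to turbo. */
static void turbo_step_down(void) {
    PWM_LVL = TURBO_STEPDOWN_LVL;
}
```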

Yes, I saw that JW980 had done that in his code.

Has anyone written a simple delay command, like a long loop? I’ve noticed that including util/delay.h and using the _delay_ms() function eats a lot of memory. I’m running out of space and would think that a simple loop that doesn’t do much could achieve the same result using less memory. It would be tricky to time accurately, but exact timing on warning and cut-off blinks does not really matter to me. Anyone tried it?

I’ve got a temporary setup with an adjustable DC supply and have hooked up the wires for easy access, so I’ve tried the loop myself now and it works. Doing this saves me plenty of bytes which I probably won’t need in my “final” firmware version, but now during the testing stage I can cram more of my own code into the hex file for more extensive testing.

The below loop within a loop within a loop within a loop takes very roughly about one second (the counters are declared volatile so the compiler doesn’t optimize the empty loops away):

volatile uint8_t A = 0;
while (A < 14) {
    volatile uint8_t B = 0;
    while (B < 255) {
        volatile uint8_t C = 0;
        while (C < 255) {
            volatile uint8_t D = 0;
            while (D < 255) {
                D++;
            }
            C++;
        }
        B++;
    }
    A++;
}

This can of course be made into a routine that takes an argument which is then scaled to approximate milliseconds, meaning _delay_ms() and delay.h can be made entirely obsolete for those of us who do not require millisecond-accurate timing and need a little more space for code.
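A sketch of such a routine in plain C — LOOPS_PER_MS is an assumed calibration constant that would have to be tuned for the actual clock speed and compiler output:

```c
#include <stdint.h>

/* Assumed calibration constant: inner-loop iterations per millisecond.
 * The real value depends on clock speed and what the compiler emits. */
#define LOOPS_PER_MS 800UL

/* volatile so the "work" inside the loop can't be optimized away */
volatile uint32_t spin_count;

/* Busy-wait roughly `ms` milliseconds -- a crude stand-in for
 * _delay_ms() when accuracy doesn't matter. */
static void busy_delay_ms(uint16_t ms) {
    spin_count = 0;
    while (ms--) {
        uint32_t i;
        for (i = 0; i < LOOPS_PER_MS; i++) {
            spin_count++;   /* the volatile write is the busy work */
        }
    }
}
```

Called as busy_delay_ms(500); for a warning blink, for example. The total spin count is exactly ms × LOOPS_PER_MS, so calibrating the constant once against a stopwatch gets you close enough for blink timing.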

When using the internal oscillator I seriously doubt that _delay_ms() / delay.h was accurate anyway.

Oh yeah… I like the turbo timeout. This way you can use very high power FETs or numerous 7135s, drive it hard (barn burner) for a few seconds, and have it automatically throttle back so it doesn’t overheat. It’s a very good firmware :slight_smile:

You know the problem with doing loops like that, right? It keeps the CPU active and running at near-full power, so it eats up the battery faster. It might be better to set an interrupt for some number of cycles in the future and then go to sleep, waking whenever the interrupt goes off.

You’re probably right about there being a better way. It sounds like he’s just doing it to institute brief pauses (blinks, etc) - and the CPU draws single digit milliamps. So I think it should work fine for his purpose.

I’m still interested in what you described though, do you have a link to info on setting that up for AVR’s?

No, I simply skimmed through the attiny13a reference manual, and I’ve done similar things in the past in DOS and in Linux. Looks like it works in a very similar manner… assign an interrupt handler, set the parameters for a timer, start it, then execute the instruction to put the main CPU into standby mode. It’ll then idle at low power until something wakes it up, such as the timer popping and calling the interrupt you assigned.
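For the curious, a rough sketch of that idea on the attiny13a (register and vector names are from the datasheet and avr-libc; the prescaler choice and tick period are my own assumptions, and this is not drop-in STAR code):

```c
#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/sleep.h>

volatile uint8_t ticks;

/* Timer0 overflows every 256 * 1024 cycles -- roughly 218 ms at the
 * attiny13a's default 1.2 MHz clock (9.6 MHz internal osc / 8). */
ISR(TIM0_OVF_vect) {
    ticks++;
}

/* Idle-sleep for n timer overflows instead of busy-waiting. */
static void sleep_ticks(uint8_t n) {
    TCCR0B = (1 << CS02) | (1 << CS00);   /* start Timer0, clk/1024 */
    TIMSK0 |= (1 << TOIE0);               /* enable overflow interrupt */
    sei();
    ticks = 0;
    set_sleep_mode(SLEEP_MODE_IDLE);      /* CPU halts, timer keeps running */
    while (ticks < n) {
        sleep_mode();                     /* wake on each overflow */
    }
    TIMSK0 &= ~(1 << TOIE0);              /* done: stop ticking */
}
```

In SLEEP_MODE_IDLE the CPU core stops but Timer0 and interrupts stay alive, so current draw drops compared with a spin loop while the delay still elapses.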

I am into learning, so I might look into it, but my intended purpose with this loop is only to replace the _delay_ms() function. To me it looks like _delay_ms() is just a more complicated and byte-heavier way of doing exactly what my loop does, also without using interrupts. My loop should therefore not be eating much more CPU power than the _delay_ms() function it’s replacing.
Or have I missed something? Is _delay_ms() using interrupts?

And besides, in the main() function of the STAR 1.1 source the while(1){} loop runs voltage monitoring (unless disabled), so the CPU is constantly doing stuff anyway. If using interrupts is optimizing for CPU load, the entire voltage monitoring function should be activated by an interrupt instead of being run in a while(1){} loop.