STAR Firmware by JonnyC - Source Code and Explanation

Sweet, keep us posted. Looks like I’ll know whether my two bucks was a wasted investment long before I receive the product. ;~~)

I am changing the functionality of some of the features with the STAR 1.1 firmware and just want to make sure about what this line of code does:
PWM_LVL = modes[--mode_idx];
This changes the PWM_LVL (output) to the level of the previous mode but does not actually change the mode itself. This means that if this line is executed while the light is on high, the output will switch to that of medium, but the light will still “think” it’s on high and remember high as current mode if switched off. Is that correct?

What are you wanting to do?

I want to understand what that line does exactly :)

I am experimenting with different behavior for the low voltage step-down and critical voltage shut-off, such as not blinking “off” but blinking at the same PWM_LVL as the mode it will switch to after the warning flash. I am also contemplating whether I want the driver to remember this as the new mode, or to keep the original mode in memory while running PWM_LVL at the previous lower mode’s level. To me it looks like that line just changes the output and not the mode, but I am no programmer, so I wanted to ask; I don’t have a good testing facility set up to test this myself… yet…

As my question is more about programming syntax than the actual firmware, disregard it. I have software developers at work, I will ask one of them.

Your understanding is correct.

Yup, that's just for Turbo. It's so that when you turn it off and back on it will switch back to Turbo, and not the previous level that it stepped down to.

EDIT: Oops, and yes, it does the same thing for the low voltage ramp down. You're correct in that it doesn't save the mode, that only happens when store_mode_idx is called.
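In other words (a small illustration of the pre-decrement, not a quote from the firmware):

// Pre-decrement: mode_idx drops by one first, then indexes the array.
// With mode_idx == 3 this sets PWM_LVL = modes[2] and leaves mode_idx == 2
// in RAM; nothing reaches EEPROM until store_mode_idx() is called.
PWM_LVL = modes[--mode_idx];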

Note you can also use this line to change the turbo step down to a specific PWM value instead of the previous mode: simply replace PWM_LVL = modes[--mode_idx]; with PWM_LVL = 180; (or whatever PWM value you want). That way you can have it step down to a level of your choice but still act normally and still be able to be bumped back up to turbo with a single click.
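For illustration, the swap might look something like this (a sketch; the value 180 is just an example):

// Original behaviour: step turbo down to the previous mode's output.
// (mode_idx is decremented in RAM but not written to EEPROM, so a
// quick off/on still comes back in turbo.)
PWM_LVL = modes[--mode_idx];

// Alternative: step down to a fixed PWM level instead.
PWM_LVL = 180;   // pick whatever level keeps the light cool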

I use it like this in my small lights that need a large step down to not overheat.

Yes, I saw that JW980 had done that in his code.

Has anyone written a simple delay routine, like a long loop? I’ve noticed that including util/delay.h and using the _delay_ms() function eats a lot of memory. I’m running out of space and would think that a simple loop that doesn’t do much could achieve the same result using less memory. It would be tricky to time accurately, but exact timing on warning and cut-off blinks does not really matter to me. Anyone tried it?

I’ve got a temporary setup with an adjustable DC supply and have hooked up the wires for easy access, so I’ve now tried the loop myself and it works. Doing this saves me plenty of bytes, which I probably won’t need in my “final” firmware version, but during the testing stage I can cram more of my own code into the hex file for more extensive testing.

The loop below (a loop within a loop within a loop within a loop) takes very roughly about one second on my setup:

uint8_t A = 0;
uint8_t B = 0;
uint8_t C = 0;
uint8_t D = 0;
// NB: the counters' values are never used, so an optimizing compiler
// may collapse these loops; declaring them volatile would prevent that.
while (A < 14) {
    while (B < 255) {
        while (C < 255) {
            while (D < 255) {
                D++;
            }
            D = 0;
            C++;
        }
        C = 0;
        B++;
    }
    B = 0;
    A++;
}

This can of course be made into a routine that takes an argument which is scaled to resemble milliseconds, meaning _delay_ms() and delay.h become obsolete for those of us who do not require precise millisecond-accurate timing and need a little more space for code.
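Something along these lines, say (a rough sketch; the name rough_delay_ms and the inner count of 1200 are placeholders that would need calibrating against a stopwatch for your clock speed):

#include <stdint.h>

// Hypothetical wrapper: busy-waits very roughly "ms" milliseconds.
static void rough_delay_ms(uint16_t ms) {
    while (ms--) {
        volatile uint16_t i = 1200;   // calibrate this count for your clock
        while (i--) { }               // volatile stops the optimizer removing the loop
    }
}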

When running on the internal oscillator I seriously doubt that _delay_ms() / delay.h were accurate anyway.

Oh yeah… I like the turbo timeout. This way you can use very high power FETs or numerous 7135s, drive it hard (barn burner) for a few seconds, and have it automatically throttle back so it doesn’t overheat. It’s a very good firmware :)

You know the problem with doing loops like that, right? It keeps the CPU active and running at near-full power, so it eats up the battery faster. It might be better to set an interrupt for some number of cycles in the future and then go to sleep, waking whenever the interrupt goes off.

You’re probably right about there being a better way. It sounds like he’s just doing it to institute brief pauses (blinks, etc) - and the CPU draws single digit milliamps. So I think it should work fine for his purpose.

I’m still interested in what you described, though. Do you have a link to info on setting that up for AVRs?

No, I simply skimmed through the attiny13a reference manual, and I’ve done similar things in the past in DOS and in Linux. Looks like it works in a very similar manner… assign an interrupt handler, set the parameters for a timer, start it, then execute the instruction to put the main CPU into standby mode. It’ll then idle at low power until something wakes it up, such as the timer popping and calling the interrupt you assigned.
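For what it’s worth, on the ATtiny13A that pattern might look roughly like this (a minimal sketch only, assuming the watchdog timer is free for this and nothing else wakes the chip; register and bit names are from the ATtiny13A datasheet):

#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/sleep.h>
#include <avr/wdt.h>

volatile uint8_t wdt_fired = 0;

ISR(WDT_vect) {
    wdt_fired = 1;              // flag set by the watchdog interrupt
}

static void sleep_about_500ms(void) {
    cli();
    wdt_reset();
    WDTCR = (1 << WDCE) | (1 << WDE);                  // timed sequence to unlock changes
    WDTCR = (1 << WDTIE) | (1 << WDP2) | (1 << WDP0);  // interrupt-only mode, ~0.5 s period
    wdt_fired = 0;
    sei();
    set_sleep_mode(SLEEP_MODE_IDLE);
    while (!wdt_fired) {
        sleep_mode();           // idle at low power until the WDT interrupt fires
    }
    WDTCR = 0;                  // stop further watchdog interrupts
}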

I am into learning, so I might look into it, but my intended purpose with this loop is only to replace the _delay_ms() function. To me it looks like _delay_ms() is just a more complicated and byte-heavier way of doing exactly what my loop does, also without using interrupts. My loop should therefore not be eating much more CPU power than the _delay_ms() function it’s replacing.
Or have I missed something? Is _delay_ms() using interrupts?

And besides, in the main() function of the STAR 1.1 source the while(1){} loop runs voltage monitoring (unless disabled), so the CPU is constantly doing stuff anyway. If using interrupts is optimizing for CPU load, the entire voltage monitoring function should be activated by an interrupt instead of being run in a while(1){} loop.

If you look at the end of the while(1) loop, you'll see a "sleep_mode();" command which will go to sleep and wait for the WDT interrupt. I didn't put it in there for power consumption though, just so that the ADC low voltage detection doesn't trigger immediately (it has to detect low voltage 8 times over 4 seconds, if I read my code right).
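Paraphrasing that in code, something like this (a sketch only; the names lowbatt_cnt, ADC_LOW and check_voltage are stand-ins, not quotes from the source):

#include <stdint.h>

#define ADC_LOW 130              // hypothetical threshold value

static uint8_t lowbatt_cnt = 0;

// Called once per WDT wake-up from the main while(1) loop:
static void check_voltage(uint8_t adc_value) {
    if (adc_value < ADC_LOW) {
        if (++lowbatt_cnt >= 8) {   // 8 low readings, one per ~0.5 s tick, is ~4 s
            lowbatt_cnt = 0;
            // ramp down or shut off here
        }
    } else {
        lowbatt_cnt = 0;            // a single good reading resets the count
    }
}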

Ahh, I see. I missed that one. I took a look at sleep_mode() and can’t make sense of what it does… at all.

But that takes me back to the _delay_ms() function. It doesn’t appear to use any interrupts. I can’t really say for sure, but the code certainly has none of the mumbo jumbo that’s in sleep_mode(). To me _delay_ms() appears to be just a bunch of loops counting up until they’re done. Can anyone verify? Is there anything in _delay_ms() that suggests it would use less CPU load than a simple loop?
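For what it’s worth, that matches my reading of avr-libc: _delay_ms() boils down to cycle-counted busy loops (see <util/delay_basic.h>), with no interrupts involved. The bloat likely comes from the argument handling; when the value passed isn’t a compile-time constant, _delay_ms() drags in floating-point math. Calling the underlying primitive directly avoids that (a sketch; the count of 1200 assumes _delay_loop_2()’s 4 cycles per iteration at a 4.8 MHz clock, which would need checking against your build):

#include <stdint.h>
#include <util/delay_basic.h>

// _delay_loop_2() burns ~4 CPU cycles per iteration (avr-libc).
static void delay_ms_approx(uint16_t ms) {
    while (ms--) {
        _delay_loop_2(1200);   // ~1 ms at 4.8 MHz; scale for your F_CPU
    }
}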

I have refined my loop a bit and made it into a routine:

// Each count of X is roughly 0.1 s, so X = 10 gives about a 1 second delay.
void my_delay(uint8_t X) {    // hypothetical name for the routine
    uint8_t A = 0;
    uint16_t B;
    uint8_t C;
    while (A < X) {
        B = 0;
        while (B < 25850) {
            C = 0;
            while (C == 0) {
                C++;   // Odd! With nothing in this loop the whole thing runs
                       // extremely fast; adding the loop slows it down, but the
                       // iteration count barely changes the total time (most
                       // likely the optimizer collapsing work that is never used).
            }
            B++;
        }
        A++;
    }
}

Each count of X is about 0.1 seconds, so passing 10 as the argument gives about a 1 second delay.

I compiled a hex file with a bunch of modes and timers that has five instances of my delay and compared it to the same code with _delay_ms() instead; the difference was 249 bytes. Considering the limited space, I think it’s quite useful for my tests: it has been the difference between a hex file that fits and one that doesn’t.

I haven’t actually gotten started writing and flashing attiny firmware yet. I just figured, as long as the chip has dedicated hardware for offloading wait loops, it might be worth using. However, that would only work if that interrupt and counter aren’t already being used for something else, and it doesn’t guarantee the CPU won’t be awakened early by some other interrupt.

I’ll know more after I really have a chance to dive in and start mucking with things. I’m planning to do some stuff with variable-speed slow strobes and underclocking and clock dividers, so timing will get a bit tricky.

I agree, but it appears that the _delay_ms() routine that comes with the package doesn’t use it, so by replacing it with my loop I’m not making anything worse.

It could perhaps get better with what you suggest, but rewriting the existing _delay_ms() function to incorporate interrupts is way over my head. I just needed a few more bytes and, as far as I can tell, I didn’t cause any increased CPU load.

Edit: If you have a look at delay.h (which I assume comes along with AVR Studio) I think you’ll see what I mean. I’m doing the same thing as that routine, just less byte-intensive, that’s all.