Attiny25/45/85 FW Development Thread

Ahh, ok. Think I kind of/sort of understand. The 1.1v source is always a true 1.1v, but the reference used to read it is Vcc, so what you read will vary as the Vcc reference varies. The resolution is less because you are reading 1.1v instead of a true batt+ level, I suppose, so almost 4X less resolution.

I’m actually still a little unclear on how this works Halo. Previously I suppose we were doing “single ended” measurements where we compared the voltage from our hardware voltage divider against a 1.1v bandgap. You’ve proposed measuring the 1.1v bandgap against Vcc (~4v for example).

I think I’m following you now Tom E. Are we thinking that 4v might yield a value of ~281/1023 and 3v might yield a value of ~375/1023? That doesn’t seem like a very large delta, but let’s stretch this out.

4.2v - 268
4.1v - 274
4v - 281
3.9v - 289
3.8v - 296
3.7v - 304
3.6v - 313
3.5v - 322
3.4v - 331
3.3v - 341
3.2v - 352
3.1v - 363
3v - 375
2.9v - 388
2.8v - 402
2.7v - 417
2.6v - 433
2.5v - 450
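
Those numbers are just ADC = 1.1 * 1024 / Vcc, by the way. A quick host-side C snippet (plain C on a PC, not AVR code) reproduces the table to within a count:

  #include <stdio.h>

  // Expected 10-bit reading when the ADC measures the internal 1.1v
  // bandgap with Vcc as its reference: ADC = 1.1 * 1024 / Vcc.
  int main(void) {
      for (int mv = 4200; mv >= 2500; mv -= 100)
          printf("%.1fv - %.0f\n", mv / 1000.0, 1.1 * 1024.0 * 1000.0 / mv);
      return 0;
  }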

That actually looks much better. For our purposes we generally care about something like 4.2v through 2.8v I believe, so we’ve got about 134 measurement points in there (402 - 268). This is approximately double the resolution we normally get, since the usual 8-bit measurements only range from the low hundreds to the high hundreds (120 through 190 or something like that).

I actually have / had some example code. I'm looking for it.

~ edit ~

#include <avr/io.h>
#include <util/delay.h>

long readVcc() {
  ADMUX = _BV(MUX3) | _BV(MUX2);    // Set reference to Vcc and measurement input to the internal 1.1V bandgap
  ADCSRA |= _BV(ADEN);              // Make sure the ADC is enabled

  _delay_ms(2);                     // Wait for Vref to settle
  ADCSRA |= _BV(ADSC);              // Start conversion
  while (bit_is_set(ADCSRA, ADSC)); // measuring

  uint8_t low = ADCL;               // must read ADCL first; it then locks ADCH
  uint8_t high = ADCH;              // reading ADCH unlocks both

  long result = (high << 8) | low;
  // Note: might need to discard the first result after changing ADMUX

  result = 1125300L / result;       // Calculate Vcc in millivolts; 1125300 = 1.1 * 1023 * 1000
  return result;
}
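
And a quick usage sketch of my own (the 3000 mV threshold is just an example, pick your own LVP point):

  // e.g. low-voltage protection check at roughly 3.0v
  if (readVcc() < 3000) {
      // step down, or shut off
  }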

Awesome Halo…. So dump the voltage divider and free up a pin? Sounds great. Would need to factor in the voltage drop if using a reverse polarity protection diode. Guess a voltage divider and pin would still be needed if using 2S or more cells, especially if using a voltage regulator chip to power the MCU.

Sweet mod up in Post 755 Tom E.

I think this Halo method for measuring voltage would have a big advantage for e-switch lights in parasitic drain. I could be off on this, not sure, but using Ohm’s law (I = V / R), would the 4.7K and 22K resistors we use result in about a 0.157 mA drain by themselves (4.2v / 26,700 ohms ≈ 0.157 mA)? This was probably discussed somewhere in the past I would think. Can anyone confirm, theoretical or otherwise? I’m measuring about 0.300 mA parasitic drain on a few of my e-switch lights, while a NC MH20 (0.027 mA) and SWM C20C (0.009 mA) are much lower.

For single cell momentary applications, if we eliminate both resistors and the polarity protection diode, we should have enough space to use a MOSFET for polarity protection. This would eliminate almost 100% of the voltage drop we currently see while using a diode for polarity protection.

EDIT: At least with the older boards. The “new stuff” like A17DD-L v30+ is still in flux and already packed super tight, but I’d certainly like to implement a PCB to use this technique for momentary stuff.

I’d like to note that I didn’t come up with it. Heard about it, went looking for an example, then half forgot about it.
I heard about it originally in relation to a different AVR, back before the attiny25/45/85 came into use here, but saw that it’s also available on the attiny25/45/85. Not an option with the tiny13 though.

I am trying to imagine what a PID might be. As I understand it, there is a heat sensor either on the driver board or on the LED’s star. If it’s on the driver, then there is a delay and a steady-state reduction in temperature from the LED to the driver, and these might be more or less than the delay and reduction from the LED to the flashlight body. There is also heat originating in the FET, but with no gate resistor (or a small one) that contribution is small. So I don’t see how to do much better than regulating the current to keep a constant sensor temperature. One will have to experiment to decide on that temperature, but it should be somewhere around the desired flashlight body temperature. The sensor will run cooler when you hold the light or the wind blows than when you don’t, but not by a huge amount, because that will also cool the driver through the star and increase the current.
The LED temperature will overshoot because of the delay and have a higher steady temperature than the driver, but maybe it can take that. I don’t see a possibility for doing a lot better than just a simple thermostat.
On the other hand, you also need to know how much to reduce the current on the first step, so the light output doesn’t vary too much. After that things will change more slowly.
Also, the gain of the thermostat can’t be too high, or with the delay it will oscillate, turning the light off and on. That is, the table or whatever has to be spread over some range of temperature.
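
Roughly this sort of thing is all I mean, as a sketch only (the constants are invented, and it assumes a hotter sensor reads a higher ADC value):

  #include <stdint.h>

  #define TARGET_ADC 160  // hypothetical raw reading at the target temperature
  #define DEAD_BAND    2  // error (in ADC counts) to ignore, so it doesn't hunt
  #define STEP         1  // output change per pass; small step = low gain

  // Call periodically (say, once a second). Nudges the PWM level toward
  // whatever output holds the sensor at TARGET_ADC -- a plain thermostat
  // with limited gain, not a full PID.
  uint8_t regulate(uint8_t pwm, uint8_t temp_adc) {
      int16_t error = (int16_t)temp_adc - TARGET_ADC;
      if (error > DEAD_BAND && pwm > STEP)
          pwm -= STEP;                  // too hot: step the output down
      else if (error < -DEAD_BAND && pwm < 255 - STEP)
          pwm += STEP;                  // cool enough: creep back up
      return pwm;
  }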

Hhmm - I should really try measuring the sleep mode amps on a board without the diode or resistors - fairly easy to do… Dunno if anyone did that before.

Double posting this here as well: https://budgetlightforum.com/t/-/25032

I did the tests. First I removed the diode, bridged the pad with solder, and saw absolutely no difference in the parasitic drain at all - zero, nothing. Still drawing in the 0.30 to 0.32 mA range.

Then I pulled the 22K resistor. The light acted kind of weird, because I didn’t re-burn the MCU with firmware that skips LVP. Anyway, it would settle down and go into its sleep state. There I measured in the 0.16 mA range, which is almost exactly what I expected - it went to half the draw, as calculated from the resistor values.

So, not sure why the diode had no effect - or did I misunderstand whether it was supposed to help?

Btw, in terms of draw over time, a steady 0.16 mA would drain a 1 Ah cell in about 260 days (1000 mAh / 0.16 mA ≈ 6250 hours), if I did the math right.

The diode is in series with the MCU. Bridging it cannot improve the parasitic drain. If anything that gives a slightly higher voltage to the MCU, which may increase the drain ever so slightly. If there was a diode+drain related discussion it may have been about the zener diode used on 6 V drivers? That arrangement forms a type of shunt regulator, with a huge parasitic drain by definition.

Still a mystery where the other 0.160 mA is going. Your fuse values indicate that you are not using BOD, so that is good.
Maybe some MCU peripherals are not shutting down properly during sleep?

Thanks DEL - yep, that’s what I’m thinking - something else that’s not getting shut down. Spoke last night to my EE buddy here and he said the same thing about the diode we are using. He also mentioned these diodes will drop voltage when the MCU is drawing power, so I think I need to know exactly what the drop is. Using the “85” and not the “85v”, rated at 2.7v minimum, means you can’t afford to lose much voltage across the diode. Think’n the drop was 0.1-0.2 volts from what I recall, which means our LVP needs to cut off no lower than 2.9v. I believe the losses caused by the diode are why we moved the voltage divider resistors before the diode, and are using a 22K instead of the 19.1K.

Haven’t had a chance yet, but I need to dive deep into the Atmel 25/45/85 specs to see if we are missing something on sleep mode. I am aware that you have control of several sub-systems for sleep/low power.
Yep, I heard about the brown-out detector drawing current in sleep mode - the details were posted here a while back.

You can assume a 0.25 V drop for the Schottky-type diodes we are using (or try to measure it while the MCU is running; it does vary with current and temperature).

There is a discussion in one of the threads to replace D1 with a ‘boot-strapped’ PMOS FET - this would give practically zero voltage loss.

I don’t think it’s necessary to replace the diode to reduce the draw; it’s just that with an “85”, you have to be extra careful with your LVP cutoff value. The 2.7v minimum is effectively 2.95v at the cell then, so your cutoff point should be 3.0v or 3.1v for a little cushion. When I bought my ATtiny85’s from Mouser or DigiKey, I bought the 85V’s, but the last batch I bought from RMM are the “85”’s.

What driver are you using? I always take the cell voltage measurement before the diode. I thought that’s what everyone else did too.

wight’s FET+1 and MntE’s FET+1 board, reflowed myself. Cell voltage before the diode? Not sure I understand - I’m just measuring at the same place I always measure amps - at the tail: between batt- and the edge of the battery tube.

A tiny85 won’t drop dead at 2.69v. Atmel just does not guarantee it will remain reliable outside the specs. The official specs are often conservative. People can and do run them a bit outside of spec with no problems. Some even sell devices that use AVRs running outside of spec. IIRC, boards from JeeLabs are (or were) running at a clock speed + voltage combination that is outside of spec, and he said he never ran into a problem even among hundreds of chips.

Also, what speed are you running at? You can increase reliability by running at a lower clock speed (like 4 MHz), plus you’ll get a bit of power savings. I posted a while ago, somewhere :smiley: probably this thread, code to use the clock prescaler to adjust the speed beyond the options that the fuses give you.

~ edit ~
Yep, using our nifty new “search within thread” feature it comes up by searching “clkpr”.
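
The gist of it, as a sketch from memory (not the exact code I posted; per the datasheet the two CLKPR writes must land within four cycles of each other, hence the cli/sei):

  #include <avr/io.h>
  #include <avr/interrupt.h>

  // Change the system clock prescaler at runtime. With the internal
  // 8 MHz RC oscillator, div_bits = _BV(CLKPS0) gives divide-by-2, i.e. 4 MHz.
  static inline void set_clock_div(uint8_t div_bits) {
      cli();                 // the timed sequence must not be interrupted
      CLKPR = _BV(CLKPCE);   // unlock: clock prescaler change enable
      CLKPR = div_bits;      // new prescaler; must follow within 4 cycles
      sei();
  }

avr-libc also wraps this same timed sequence as clock_prescale_set() in <avr/power.h>, iirc.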

Apparently, I was talking to some guy named Tom E in the above quote. So, I guess it’s nothing new to you? :-x

I forget far more than I remember. It’s déjà vu all over again. I recall reading this and maybe other posts about it, but it’s untested, and I thought incomplete at the time. There are 2 timed things I need - the 16 msec timer (or I need to know exactly what it is), and the built-in _delay_ms() routine, so any change that impacts these is a problem. I don’t have any decent development/debug capabilities with these parts, and not much time to explore things with unknown advantages. I am looking to lower sleep mode parasitic drain, and don’t see the relationship between processor speed and parasitic drain. Sorry, I haven’t had time to do the crazy hours of research on this.

Don’t get me wrong - all good ideas, but don’t ask me to go off and risk the time for a possible benefit I’m not clear on. I am very handcuffed by this firmware development environment compared to what I’m used to, lacking single-step debugging, profiling, etc., so experimenting is very time consuming.

I am very committed to 8 MHz right now because I know it works. I haven’t seen anything posted in detail, fully working, or fully explained and tested to work, at lower speeds. Again, no time for the R&D. Early on I tried a lower speed via the fuses, and the timing of something was way off, even though I thought I made all the proper settings. This, of course, is a shame because others are doing this, I’m pretty sure, but I just can’t find the source code and/or fuse settings, etc., or they are unwilling/not allowed to post it.

I sometimes feel like I forget far more than I remember as well. Glad I’m not alone. :smiley:

ToyKeeper mentioned testing it here. Not sure how it affects _delay_ms() and the 16 msec timer you’re using, but it should be one of two possibilities: cuts it in half or no effect. About power savings, I don’t have any links off hand, but I’ve also seen people test combinations of different sleep levels and clock speeds to verify what you get in the real world. I expect the savings to be minor. I brought it up more because you’re concerned about the tiny85 being rated at 2.7v minimum. Even with the tiny85v you’re not supposed to go below 2.7v if you’re running at 8 MHz; 4 MHz is the max (officially) for the 1.8-2.7v range.

Are you disabling the ADC before power-down sleep?

ADCSRA &= ~(1 << ADEN); // disable ADC (before powering down)
ADCSRA |= (1 << ADEN);  // re-enable ADC (after waking)

You’re not using the watchdog, right?
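
If it helps, the shutdown sequence I’d expect, as a rough sketch only (the pin-change wake setup is assumed to exist elsewhere in your code):

  #include <avr/io.h>
  #include <avr/sleep.h>

  // Everything I know of that draws current in power-down gets switched
  // off first; an ADC left enabled is the classic culprit.
  void go_to_sleep(void) {
      ADCSRA &= ~_BV(ADEN);   // ADC off
      ACSR |= _BV(ACD);       // analog comparator off
      set_sleep_mode(SLEEP_MODE_PWR_DOWN);
      sleep_enable();
      sleep_cpu();            // sleeps here; a pin-change interrupt wakes us
      sleep_disable();
      ADCSRA |= _BV(ADEN);    // ADC back on after wake
  }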