Flashlight Firmware Repository

Hmm, interesting. So if I understand correctly, we could free up a pin and maybe remove two resistors on 1-cell tiny25/45/85 drivers? And this wouldn’t work on zener-style drivers because VCC is artificially capped?

In exchange for freeing up a pin and potentially removing a couple components, the voltage resolution would be decreased somewhat… but we’re already getting ~4.5 ADC units per 0.1V, so we have some resolution to spare right now. And it’s currently only using 8 of the 10 available bits of precision anyway.

Is this more or less correct?

Yep. I think the greatest benefit might just be gaining a bit more room on the 17mm single-sided PCB, since it sounds like we need to add components to get better stability: a small decoupling cap, a FET gate resistor, and/or a pulldown.
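
(For reference: the divider-less reading discussed above is usually done by sampling the internal 1.1V bandgap against VCC. Below is a minimal sketch of that trick, assuming an ATtiny25/45/85 and the register names from its datasheet; it's an illustration, not code from the repository.)

```c
#include <avr/io.h>
#include <stdint.h>

// Measure VCC with no external divider: use the internal 1.1V bandgap
// as the ADC input, with VCC itself as the reference. Since the
// reference is what varies, a *lower* reading means a *higher* VCC.
uint16_t read_vcc_mv(void) {
    ADMUX = (1 << MUX3) | (1 << MUX2);   // MUX[3:0]=0b1100: bandgap in, VCC ref
    ADCSRA |= (1 << ADEN);               // make sure the ADC is enabled
    ADCSRA |= (1 << ADSC);               // throwaway conversion while Vbg settles
    while (ADCSRA & (1 << ADSC)) {}
    ADCSRA |= (1 << ADSC);               // real conversion
    while (ADCSRA & (1 << ADSC)) {}
    return (uint16_t)(1100UL * 1024 / ADC);  // VCC in mV = 1.1V * 1024 / reading
}
```

Near a full cell this gives readings around 270 counts, i.e. on the order of 15 mV per count, so the resolution loss mentioned above is real but modest.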

Any takers? PM me. I’m offering $10 for Job 1 and $15 for Job 2 if they work. I think there should be a fair amount of crossover between the two.

I’ll do it freely if you’re not in a hurry (I have a busy week ahead) and are willing to accept that you might flash a few buggy revisions. I just got my AVR programmer working and it’ll be a change from trying to hack something from scratch, which I’m certain will involve spectacular failure at some point :slight_smile:

Part 1 seems pretty clear, but about part 2: do you mean dual-PWM as in STAR Momentary, with ALT_MODES driving pin 5? Does that mean you also need PWM mode selection? I’ll post back in your driver thread once I get around to getting something done.

I’m happy to help as much as I can with testing revisions; I just need help from someone who understands this better than I do.

Both should have PWM options for pins 5 + 6 (like STAR Momentary already does), plus toggling pin 3 for turbo and such. Really, exactly what DEL did to blf-a6 for me, but in versions for momentary and dual-switch lights.

Dual-PWM was really overdue for STAR dual-switch anyway, but I think JohnnyC just didn’t see the demand for it.
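
(A minimal sketch of that pin arrangement, assuming an ATtiny25/45/85 where pin 5 = PB0/OC0A, pin 6 = PB1/OC0B, and pin 3 = PB4; the function names are made up for illustration, not taken from STAR.)

```c
#include <avr/io.h>
#include <stdint.h>

// PWM on pins 5 and 6 (OC0A/OC0B), plain on/off toggle on pin 3 (PB4).
void pwm_init(void) {
    DDRB  |= (1 << PB0) | (1 << PB1) | (1 << PB4);  // all three as outputs
    TCCR0A = (1 << COM0A1) | (1 << COM0B1)          // non-inverting on both channels
           | (1 << WGM01)  | (1 << WGM00);          // fast PWM, 8-bit
    TCCR0B = (1 << CS00);                           // clock/1, no prescaler
}

void set_channels(uint8_t ch1, uint8_t ch2, uint8_t turbo) {
    OCR0A = ch1;                       // pin 5 duty cycle
    OCR0B = ch2;                       // pin 6 duty cycle (which pin drives
                                       // the FET vs. 7135s depends on the board)
    if (turbo) PORTB |=  (1 << PB4);   // pin 3 high for turbo and such
    else       PORTB &= ~(1 << PB4);
}
```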

FYI, Halo asked me if I was disabling the AtoD in sleep mode in my Narsil version, and I noticed I wasn't (same as STAR Momentary). So I added code to turn OFF the AtoD during power-down sleep mode, and it cuts parasitic drain in half -- a big gain from a small, easy mod in the e-switch firmware we commonly use.

So, the STAR Momentary and Ferrero-Rocher firmwares both do not disable the AtoD from what I can see, unless there's something I'm missing. Post #789 here has the sleep_mode code. Post #790 here has the result of using 10X values for R1 and R3 - it reduces parasitic drain by a factor of 10. I think this resistor mod needs more testing and review, because it sounds like it's technically out of spec for the Atmel MCUs, and I'm not sure it can be used with 13A's.
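
(A minimal sketch of that mod, using avr-libc's sleep API; the function name is made up for illustration, and the wake-up interrupt setup is omitted. This isn't the actual Narsil/STAR code.)

```c
#include <avr/io.h>
#include <avr/sleep.h>

// Turn the ADC off before power-down; left enabled, it keeps drawing
// current even while the MCU sleeps.
void sleep_until_button(void) {
    ADCSRA &= ~(1 << ADEN);            // disable the AtoD
    set_sleep_mode(SLEEP_MODE_PWR_DOWN);
    sleep_mode();                      // sleeps here until the e-switch interrupt fires
    ADCSRA |= (1 << ADEN);             // re-enable the AtoD after waking
}
```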

It’s not a hard spec like the allowed VCC or frequency range (and even these can be stretched). What I could find in the 13A’s doc is:

The way I read this is that you can use larger resistors but the ADC will not run as quickly, which we don’t care about at all.

Ahh, that's interesting... So it sounds like a pretty good solution for us. If reducing parasitic drain is important to you, then for our e-switch based lights with our custom firmware, this option of using 220K and 47K resistors is the way to go, combined with modding the code to turn off the AtoD during sleep mode.
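
For a quick sanity check on the numbers: since the mod uses 10x values, the stock divider must be 22K + 4.7K ≈ 26.7K total, versus 220K + 47K = 267K modded. The divider's standing drain is just the cell voltage divided by the total resistance, so at 4.2V that's roughly 4.2V / 26.7K ≈ 157 µA stock versus 4.2V / 267K ≈ 16 µA modded: a clean factor of ten, and right in line with the measurements below.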

In my tests, I measured from 0.314 mA originally, then with the mods down to 0.016 mA, about a 95% drop.

I am also not sure how much the MCU's own ADC sampling will load the divider and affect the readings when you are using resistor values that high.

I haven't tested values this high myself on the drivers so this is just a guess, but I am predicting much lower precision overall if you go to 10x higher values.

Also, remember that higher values affect MCU turn-off times when you cut the power. This won't matter much for momentary-only setups, but for any clicky setup it's something to keep in mind; it may not be an issue, and you may be able to compensate for it in other ways.

All I did was simple tests with a cell at 3.6v and one at 4.1v, and my firmware blinked out the correct voltage. I need to do a lot more testing, but I've never done calibration to begin with, and I've found accuracy to be pretty good: within 0.1v for the most part, using the tables TK had, which I think came from Dale's measurements.

Edit:

Did more tests of various cells, various levels:

| Voltage Level Blinking | Actual Voltage (DMM) | 18650 Cell |
|---|---|---|
| 4-1 | 4.12v | AWT 2500 |
| 3-5 | 3.505v | HE4 |
| 2-8 | 2.811v | Sam 15M |
| 4-2 | 4.21v | 30Q |
| 4-2 | 4.18v | MJ1 |
| 4-1 | 4.14v | MJ1 |
| 2-8 | 2.776v | 15M |
| 2-7 | 2.725v | 15M |
| 2-7 | 2.691v | 15M |

It's about as good as any of my other e-switch lights.
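
(For anyone curious how a readout like "4-1" is produced, here's a minimal sketch of a blink-out routine; the flash() and pause_ms() helpers and the timing are made up for illustration, not from the actual firmware.)

```c
#include <stdint.h>

// Assumed helpers: pulse the emitter once, and delay for some milliseconds.
extern void flash(void);
extern void pause_ms(uint16_t ms);

// Blink a voltage as two digit groups, e.g. 41 -> 4 blinks, pause, 1 blink.
void blink_voltage(uint8_t volts_x10) {            // e.g. 41 means 4.1v
    uint8_t i;
    for (i = volts_x10 / 10; i > 0; i--) flash();  // whole volts
    pause_ms(1000);                                // gap between digit groups
    for (i = volts_x10 % 10; i > 0; i--) flash();  // tenths
}
```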

Maybe calibration is more important when using higher values. With calibration, and using 10-bit values from the ADC, I have very good accuracy. Without calibration it was good enough, until I bumped into that MCU with a lower internal reference voltage.

As I have the e-switch and off-time cap on the same pin as the voltage divider, I perhaps pay more attention to what is going on. I wrote a debugging routine that blinks out X.XX volts, and I want it to be as accurate as it can be. I use real voltage values for off-time measuring, so the off-times are calibrated along with the voltage calibration. That made it easier to see what happens with off-times when checking full and depleted cells at different temperatures, and how the voltage monitoring and off-time cap charging behave during e-switch presses and so on.

I guess for “normal” use you won’t need calibration, but when using these three functions on the same pin I got much better consistency when I calibrated the internal reference voltage, at least for MCUs that have an unusually low/high internal reference voltage.
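
(A minimal sketch of the idea, with made-up names and storage choices, not taken from the actual firmware: the per-MCU scale factor is found once against a DMM and stored, which folds the actual internal reference voltage into a single constant that both the voltage blink-out and the off-time thresholds can share.)

```c
#include <avr/eeprom.h>
#include <stdint.h>

// One calibration constant per MCU, stored in EEPROM:
//   scale = (known cell voltage in mV * 1024) / raw 10-bit reading at that voltage
uint16_t EEMEM cal_scale = 6250;               // example value only

uint16_t adc_to_mv(uint16_t raw10) {           // raw10 = 10-bit ADC result
    uint16_t scale = eeprom_read_word(&cal_scale);
    return (uint16_t)(((uint32_t)raw10 * scale) >> 10);
}
```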

I think the plan is to also go with a new driver layout which adds an explicit OTC drain resistor, and probably a bigger OTC. That should help eliminate dependence on voltage divider parasitic drain.

Or use the higher-value resistors on boards which don’t care about OTC, like the fancy lighted tailcaps. If it can sleep at 0.01mA instead of 0.30mA, that’s a huge bonus for tailcap use.

So… you know how I’ve complained about not knowing how to support Windows? And how it takes pages of explanations with several screenshots to show how to do something that would only take a few short commands on a unix system? Apparently this sort of thing is so common that Microsoft decided to support unix instead of expecting unix tools to support Windows.

Windows 10 is getting the ability to natively run unmodified versions of Ubuntu. In today’s “insider preview” version, it already works.

This means that, even in Windows, you should be able to do flashlight firmware development with a few simple commands. For example, to get the tools, compile something, and flash a driver, you should be able to do something like…

  • Enable the new “Windows Subsystem for Linux” feature.
  • Click the ‘bash’ icon, or start a cmd.exe shell and type ‘bash’.
  • In that shell, run a few commands:
    • apt-get install bzr gcc-avr avr-libc binutils-avr avrdude
    • bzr branch lp:flashlight-firmware
    • cd flashlight-firmware/ToyKeeper/blf-a6
    • ../../bin/build.sh blf-a6
    • ../../bin/flash.sh blf-a6.hex

You don’t even have to type most of it, because pressing <Tab> will auto-complete a lot of things, and the <Up> key recalls previous commands. After the first time it’s easier, since you already have it set up (assuming Windows saves the state instead of installing a fresh OS each time).

It’s only a preview so far, but once it’s a little more mature I hope this will make things easier for everyone.

I was just reading about the Windows Subsystem for Linux on Ars Technica: “Why Microsoft needed to make Windows run Linux software”. Still, it feels crazy.

Running VMs has been around a long time, whether shareware or full-blown third-party products. Pretty much everyone in development does this now - I’m not sure I know a developer who doesn’t run multiple OSes on their computer via VMs. I would think most servers are now running under VMs as well - VMware dominates the market there. Both my clients converted to VMware a few years back, and pretty much every other shop I’ve heard of with servers to manage has done the same. There are just so many advantages to it.

I have a Win 7 desktop for my primary development at work, but with Oracle VM VirtualBox I run Win XP when I need to run old dev tools. For testing, we have VMs of a whole bunch of Windows versions (XP, Win7, Win8, Win10, in 32/64-bit versions), including international versions. The same can be done for various Unix/Linux/etc. VMs.

I’m not sure that extending Win10 with native support is that much of an advantage; maybe it has some.

Ish. There’s a definite push to have lots of little servers instead of one big one, and to make those little servers independent of the hardware. But instead of fully virtual machines, it’s a lot more efficient to use containers (LXC, chroot, bsd jails, etc). Isolation instead of emulation. And preferably with automatic deployment and load scaling. This often gets grouped under the label of “cloud” computing, using popular tools like OpenStack.

The thing Microsoft just did is to make it possible to run an Ubuntu container under Windows. This probably required a lot of kernel-level API glue code, to translate Linux system calls into Windows system calls, but otherwise should be relatively simple.

Virtual machine-based solutions are still popular, using VMware and Xen and similar tech, but after peaking in 2009 it seems to be declining now in favor of lightweight isolation. This isn’t the best representation, but here’s a trend graph from Google.

I am very interested in trying this… but I kinda hate to on my main system since it is all beta… I will give it a little time and then try it!

I have no idea if it’ll actually work (the bash-on-Windows stuff) because I don’t have a Windows computer to try it on. However, it’s supposed to be pretty close to the real thing.

In particular, I’m really unsure if the avrdude step will work, because that requires Microsoft to have all the USB access stuff wrapped properly… and that’s not an easy thing to do. They may have done just networking and task management, since that’s all it would need to get basic web development tools working.

Hi All,

Does anyone have a theory as to why I can no longer get Atmel Studio 6.2 to recognize my programmer? When I click into the programming box, there was always an option that said AVRISP mkII. Now it only says “Simulator”.

When I disconnect and reconnect the box, it makes the noise you would expect, and in Device Manager, Windows claims it’s working properly.

Any ideas would be helpful :slight_smile:

A little help here… I’m running Windows 10 64-bit and have been trying to get my USBasp recognized by either avrdude or eXtreme Burner AVR, and nothing…

I followed the necessary steps to get the drivers correctly installed, and it already appears as a USBasp device in Device Manager.

This is my programmer; it has a jumper to select 3.3V or 5V. I suppose it goes in the 3.3V slot?

Thanks!