Flashlight Firmware Repository

FYI, Halo asked me if I was disabling the AtoD (ADC) during sleep mode in my Narsil version, and I noticed I wasn't (same as STAR Momentary). So I added code to turn OFF the AtoD when entering power-down sleep, and it cuts parasitic drain in half -- a big gain for a small, easy mod in the e-switch firmware we commonly use.
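
For anyone who wants to do the same mod, the core of it is just a few lines of AVR C. This is a minimal sketch of the idea, not the exact Narsil code, and the function name is only for illustration:

    #include <avr/io.h>
    #include <avr/sleep.h>

    // Turn the ADC off before power-down sleep so it stops drawing
    // current, then turn it back on after a wake-up interrupt
    // (e.g. the e-switch pin-change interrupt).
    void sleep_until_eswitch(void)
    {
        ADCSRA &= ~(1 << ADEN);               // disable the ADC
        set_sleep_mode(SLEEP_MODE_PWR_DOWN);  // deepest sleep mode
        sleep_mode();                         // sleep until an interrupt fires
        ADCSRA |= (1 << ADEN);                // re-enable the ADC on wake-up
    }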

So, the STAR Momentary and Ferrero-Rocher both do not have this disabling of AtoD from what I can see, unless there's something I'm missing. Post #789 here has the sleep_mode code. Post #790 here has the result of using 10X values for R1 and R3 - it reduces parasitic drain by a factor of 10. I think this resistor mod needs more testing and review, because it sounds like it's technically out-of-spec for the Atmel MCUs, and I'm not sure it can be used with 13A's.

It’s not a hard spec like the allowed VCC or frequency range (and even these can be stretched). What I could find in the 13A’s doc is the note on analog input circuitry: the ADC is optimized for analog signals with an output impedance of about 10k or less; if a higher-impedance source is used, the sampling time depends on how long the source needs to charge the sample-and-hold capacitor.

The way I read this is that you can use larger resistors but the ADC will not run as quickly, which we don’t care about at all.
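
If the slower sampling ever did matter, the knob to turn is the ADC clock prescaler, since a slower ADC clock stretches the fixed sample window and gives the big resistors more absolute time to charge the S/H capacitor. A minimal sketch, assuming an attiny13-style ADCSRA layout and an init function name of my own invention:

    #include <avr/io.h>

    // Run the ADC at its slowest clock (F_CPU/128) so the fixed
    // 1.5-ADC-cycle sample window is as long as possible in real time,
    // giving a high-impedance divider more time to charge the S/H cap.
    void adc_init_slow(void)
    {
        ADCSRA = (1 << ADEN) | (1 << ADPS2) | (1 << ADPS1) | (1 << ADPS0);
    }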

Ahh, that's interesting... It sounds like a pretty good solution for us. If reducing parasitic drain is important to you, then for our e-switch based lights with our custom firmware, this option of using 220K and 47K resistors is the way to go, combined with modding the code to turn off the AtoD during sleep mode.

In my tests, I measured 0.314 mA originally; with the mods it dropped to 0.016 mA, about a 95% reduction.
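
That measured value lines up almost exactly with a simple Ohm's-law estimate, assuming the divider is essentially the only load left once the AtoD is off and the cell is near full charge:

    I = V / (R1 + R3) = 4.2 V / (220K + 47K) ≈ 15.7 µA ≈ 0.016 mA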

I am also not sure how much the MCU reading the ADC will affect things when you are using resistors of that high a value.

I haven't tested values this high myself on the drivers so this is just a guess, but I am predicting much lower precision overall if you go to 10x higher values.

Also, remember that higher values affect MCU turnoff times when you cut the power. This won't matter much for momentary-only lights, but for any clicky setup it is something to keep in mind. It may not be an issue, and you may be able to compensate for it in other ways.
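
To put a rough number on it (the 10 µF here is purely an assumed figure for a driver's on-board capacitance, not a measurement): the residual charge bleeds off through the divider, so the decay time constant scales directly with the resistance:

    t = R * C = (220K + 47K) x 10 µF ≈ 2.7 s    (vs. ~0.27 s with stock 22K + 4.7K)

So with 10x resistors the MCU can stay alive roughly 10x longer after power is cut, which is exactly what off-time measurement on a clicky light depends on.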

All I did was simple tests of a cell at 3.6v and one at 4.1v, and my firmware blinked out the correct voltage. I need to do a lot more testing, but I've never done calibration to begin with. I've found accuracy to be pretty good though, within 0.1v for the most part, using the tables TK had, which I think came from Dale's measurements.

Edit:

Did more tests of various cells, various levels:

Voltage Level Blinking | Actual Voltage (DMM) | 18650 cell
4-1 | 4.12v | AWT 2500
3-5 | 3.505v | HE4
2-8 | 2.811v | Sam 15M
4-2 | 4.21v | 30Q
4-2 | 4.18v | MJ1
4-1 | 4.14v | MJ1
2-8 | 2.776v | 15M
2-7 | 2.725v | 15M
2-7 | 2.691v | 15M

It's about as good as any of my other e-switch lights.

Maybe calibration is more important when using higher values. With calibration and using 10-bit values from the ADC, I have very good accuracy. Without calibration it was good enough, until I bumped into an MCU with a lower internal reference voltage.

As I have the E-switch and off-time cap on the same pin as the voltage divider, I perhaps pay more attention to what is going on. I wrote a debugging routine that blinks out X.XX volts, and I want it to be as accurate as it can be. I use real voltage values for off-time measuring, so the off times are calibrated along with the voltage calibration. That made it easier to see what happens with off times when checking full and depleted cells at different temperatures, and how the voltage monitoring and off-time cap charging behaved during E-switch presses, and so on.

I guess for “normal” use you won't need calibration, but when using these three functions on the same pin I got much better consistency if I calibrated the internal reference voltage, at least for MCUs that have an unusually low/high internal reference voltage.
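
For illustration, here's roughly what a calibrated 10-bit readout can look like. This is a sketch under my own assumptions, not the poster's actual code: the 625 scale factor assumes the 220K/47K divider and a nominal 1.1V internal reference, and "calibration" means trimming that constant per MCU against a DMM reading:

    #include <avr/io.h>
    #include <stdint.h>

    // Convert a 10-bit ADC reading to hundredths of a volt.
    // With a 220K/47K divider and a nominal 1.1V internal reference:
    //   V_cell(0.01V) = raw * (1.1/1024) * ((220+47)/47) * 100 ~= raw * 625 / 1024
    #define VOLTAGE_SCALE 625

    // Assumes ADMUX already selects the divider channel and the 1.1V reference.
    static uint16_t adc_read_10bit(void)
    {
        ADCSRA |= (1 << ADSC);            // start a conversion
        while (ADCSRA & (1 << ADSC)) {}   // wait for it to finish
        uint8_t lo = ADCL;                // ADCL must be read before ADCH
        uint8_t hi = ADCH;
        return ((uint16_t)hi << 8) | lo;
    }

    static uint16_t battery_centivolts(void)
    {
        uint32_t raw = adc_read_10bit();
        return (uint16_t)(raw * VOLTAGE_SCALE / 1024);  // e.g. 412 blinks as 4.12v
    }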

I think the plan is to also go with a new driver layout which adds an explicit OTC drain resistor, and probably a bigger OTC. That should help eliminate dependence on voltage divider parasitic drain.

Or use the higher-value resistors on boards which don’t care about OTC, like the fancy lighted tailcaps. If it can sleep at 0.01mA instead of 0.30mA, that’s a huge bonus for tailcap use.

So… you know how I’ve complained about not knowing how to support Windows? And how it takes pages of explanations with several screenshots to show how to do something that would only take a few short commands on a unix system? Apparently this sort of thing is so common that Microsoft decided to support unix instead of expecting unix tools to support Windows.

Windows 10 is getting the ability to natively run unmodified versions of Ubuntu. In today’s “insider preview” version, it already works.

This means that, even in Windows, you should be able to do flashlight firmware development with a few simple commands. For example, to get the tools, compile something, and flash a driver, you should be able to do something like…

  • Enable the new “Windows Subsystem for Linux” feature.
  • Click the ‘bash’ icon, or start a cmd.exe shell and type ‘bash’.
  • In that shell, run a few commands:
    • apt-get install bzr gcc-avr avr-libc binutils-avr avrdude
    • bzr branch lp:flashlight-firmware
    • cd flashlight-firmware/ToyKeeper/blf-a6
    • ../../bin/build.sh blf-a6
    • ../../bin/flash.sh blf-a6.hex

You don’t even have to type some of it, because pressing <Tab> will auto-complete a lot of things, and the <Up> key recalls previous commands. After the first time it’s easier since you already have it set up (assuming Windows saves the state instead of installing a fresh OS each time).

It’s only a preview so far, but once it’s a little more mature I hope this will make things easier for everyone.

I was just reading about the Windows Subsystem for Linux on Ars Technica: “Why Microsoft needed to make Windows run Linux software”. Still, it feels crazy.

Running VMs has been around a long time, whether via shareware or full-up 3rd party products. Pretty much everyone in development does this now - I’m not sure I know a developer who doesn’t run multiple OS’s on their computers via VMs. I would think most servers are now running under VMs as well - VMware dominates the market there. Both my clients converted to VMware a few years back, and pretty much every other shop I’ve heard of with servers to manage has done the same. There are just so many advantages to it.

I have a Win 7 desktop for my primary development @work, but with Oracle VM VirtualBox, I run Win XP when I need to run old dev tools. For testing, we have VMs of a whole bunch of Windows versions (XP, Win7, Win8’s, Win10 in 32/64-bit versions), including international versions. The same thing can be done for various Unix/Linux/etc. VMs.

I’m not sure that extending Win10 to have native support is that much of an advantage, though maybe it has some.

Ish. There’s a definite push to have lots of little servers instead of one big one, and to make those little servers independent of the hardware. But instead of fully virtual machines, it’s a lot more efficient to use containers (LXC, chroot, bsd jails, etc). Isolation instead of emulation. And preferably with automatic deployment and load scaling. This often gets grouped under the label of “cloud” computing, using popular tools like OpenStack.

The thing Microsoft just did is to make it possible to run an Ubuntu container under Windows. This probably required a lot of kernel-level API glue code, to translate Linux system calls into Windows system calls, but otherwise should be relatively simple.

Virtual machine-based solutions are still popular, using VMware and Xen and similar tech, but after peaking in 2009 it seems to be declining now in favor of lightweight isolation. This isn’t the best representation, but here’s a trend graph from Google.

I am very interested in trying this… but kinda hate to on my main system since it is all beta… I will give it a little time and will try it!

I have no idea if it’ll actually work (the bash-on-Windows stuff) because I don’t have a Windows computer to try it on. However, it’s supposed to be pretty close to the real thing.

In particular, I’m really unsure if the avrdude step will work, because that requires Microsoft to have all the USB access stuff wrapped properly… and that’s not an easy thing to do. They may have done just networking and task management, since that’s all it would need to get basic web development tools working.

Hi All,

Does anyone have a theory as to why I can no longer get Atmel Studio 6.2 to recognize my programmer? When I click into the programming box there was always an option that said AVRISP mkII. Now it only says “simulator”.

When I disconnect and reconnect the box it makes the noise you would expect. In Device Manager, Windows claims it's working properly.

Any ideas would be helpful :)

A little help here… I’m running Windows 10 64-bit and have been trying to get my USBasp recognized by either AVRdude or eXtreme Burner AVR, and nothing…

I did the necessary steps to get the drivers correctly installed, and it already appears as a USBasp device in Device Manager.

This is my programmer; it has a jumper to select 3.3V or 5V. I suppose it goes in the 3.3V slot?

Thanks!

I don’t know how to get it recognized in Windows, but you probably want it configured for 5V.

I used Hoop’s how-to and got a positive test in AVRdude on the BLF attiny13A
How to flash
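
For reference, the kind of positive test I mean is just AVRdude reading the chip without writing anything, along these lines (assuming a USBasp and an attiny13A):

    avrdude -c usbasp -p t13 -n

If it reports the device signature (0x1e9007 for the 13A) without errors, the drivers and wiring are fine.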

However, I downloaded the latest software and can’t find the attiny13A in the latest Atmel Studio. Think I’ll have to downgrade.
But the drivers for the USBasp and AVRdude work fine on my Windows 10 system.

Hope this helps…

will34,

This is the ultimate, best source of AVR support/drivers/etc. that I know of. I used it for full 8.1 support, and I think also for Win 10: http://www.protostack.com/accessories/usbasp-avr-programmer. Download and use the latest there. Mine works in the 5v position, not the 3.3v.

I bought my first USBasp from Protostack, then bought my 2nd from FastTech - both worked perfectly. One died on me recently, so I just ordered another from FT, as I always like having a backup on hand.

Very Important: Pin #4 on the USBASP V2.0 is not ground - it's TXD. I wire ground to pin #10 now and it solved several problems I had. Originally I followed the pinout description on flashlightwiki.com, and it sort of/semi worked, but it's wrong for the newer V2.0 dongles.

I updated this page: http://flashlightwiki.com/AVR_Drivers, to show the proper wiring for the USBASP V2.0.

Edit: The wiring shown in the Hoop thread's OP is wrong - I've done all the testing to prove it. Pin #4 on the dongle is not ground. It may work, or appear to work, but it's wrong. Using pin #10 for ground is much more stable and reliable, I've found.

I wired mine like this, following WarHawk-AVG’s guide:

Labeling is wrong, but the wiring appears correct.