Flashlight Firmware Repository

Yes! 5 months ago I searched and searched for such a link. I suspected it would exist, but it was not obvious to me and I apparently didn’t notice it on the revisions pages, which I did open.

Even if I had seen it, I probably would have assumed it was a patch against a prior revision :person_facepalming: , since that page contains the list of changes; in my mind, the tarball would contain only those changes. That seemed non-intuitive to me at first, but it makes perfect sense once you understand that every revision is fully tarballed.

I guess only contributors need to figure out the launchpad system :wink: .

Thanks so much!

Yeah, it probably needs to happen eventually.

It’s just … designed to do exactly the things I don’t want by default. So it’ll be kind of an uncomfortable change. I’ve been using git lately for other projects, particularly ones with simple development models… but for the flashlight repository I often have a dozen branches all moving in parallel, each in its own directory. Git is weirdly not-good at that.

Thank goodness it’s not just me.

The many-branches thing, or the difficulty with making Git handle that?

Sorry, I meant the awkwardness of branch management with Git. It made me feel better to hear a smart person make the same complaint :slight_smile: . This difficulty has trained me to minimize branching, which is a good or bad VCS trait depending on your point of view.

Sorry for rambling in your firmware thread; I’ll shut up for now.

Warning: Long post, lots of complaining, totally safe to skip.

It seems like git has all the right tools, or at least most of the right tools, but its interface design (and resulting cultural norms) could use some work.

Normally the way I work is… branch off the trunk so I have a dev area to work in. Hack code there for as long as it takes. Sometimes this is an hour, sometimes it’s a few months. Sometimes I merge upstream changes into my branch along the way, especially if it’s a long-lived branch. Then when it’s all done and fully tested, merge it back into trunk. Typically I then move the old working tree into an “old” or “merged” directory, because it often has extra files in it which aren’t and shouldn’t be committed into the repository itself. For example, notes from clients, todo lists for that specific branch, intermediate calculations and scripts, measurements, IDE clutter, etc. I don’t want to delete those files, but I don’t want them in the actual repository either.

And there are often quite a few of these branches being developed simultaneously. I tend to have some shells, editors, and maybe other things open for each one, and I usually leave that stuff open until the branch is merged and ready to be archived.

Any time I need to compare branches, it’s trivial. Standard filesystem tools can be used… whatever tools I like. And the actual process of creating branches is simple too; simply copying a branch creates a new one. Every copy is its own branch, and its directory name is the branch name.
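
A rough sketch of that workflow, with made-up names:

$ bzr branch trunk feature-x         # each branch is just another directory
$ cd feature-x
(hack, test, commit as much as needed)
$ bzr commit -m "work on feature-x"
$ cd ../trunk
$ bzr merge ../feature-x             # merge the finished branch back into trunk
$ bzr commit -m "merge feature-x"
$ mv ../feature-x ../merged/         # archive the old working tree, stray files and all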

The default behavior in git is all wrong for this type of workflow. And more generally, the interface is a bit unintuitive.

For starters, that first step (creating a branch) isn’t done with the “branch” command; it’s “git checkout -b”. That’s simple enough though. The branch command isn’t for switching branches; it’s mostly used for listing branches and deleting old ones. The checkout command is used for creating branches and switching between them.
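
For example (with a made-up branch name):

$ git checkout -b feature-x     # create a branch and switch to it
$ git branch                    # list branches
$ git checkout master           # switch back
$ git branch -d feature-x       # delete it once it has been merged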

When it comes time to merge though, git’s default behavior is to not really merge at all. Instead, it fast-forwards: it pretends the branch never happened and makes it look like all the commits happened directly on trunk (er, master). … and even though it’s the default, it’s a behavior I literally never want. If I want to pretend history was linear, I’ll use the rebase command. (As an aside: after merging, if I delete the old branch to get it out of the list of active branches, there’s no record left of the branch name ever having existed… even when there was an actual revision for the merge instead of a fast-forward.)

So I set a global config option to make “merge” always use the “--no-ff” option. This tells it to do something sane by default instead of fast-forwarding to make it look like history was linear.
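
In config terms, that’s roughly:

$ git config --global merge.ff false    # "git merge" always makes a real merge commit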

But then “git pull” breaks. Oops. Because “pull” is just an alias for “fetch” followed by “merge”, and “merge” has been told not to fast-forward. There is no first-class concept of updating the current branch to match its upstream counterpart; it’s implemented as two separate steps which don’t necessarily have quite the same meaning.

So git eventually implemented a workaround for that. And I put it in my global git config, to make it do something sane without extra options. I set pull to use “--ff” and “--ff-only”. So now it works again.
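
Concretely, something like:

$ git config --global pull.ff only      # "git pull" only ever fast-forwards; anything else is an error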

I’ve tried to override some other defaults too, like setting --no-commit by default during merge, because I want to make sure the tests pass before committing any merges. Merge, test, commit. But I haven’t found a way to make it do that yet. It tries really hard to enforce “merge, commit, test, fix, commit” instead of “merge, test, fix, commit”… and this tends to put broken revisions on the mainline, which is a big no-no.
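
For now it has to be spelled out by hand on every merge, something like this (the test command is just a placeholder):

$ git merge --no-commit --no-ff feature-x   # merge, but stop before committing
$ make test                                 # or whatever the project's test suite is
$ git commit                                # commit the merge only if the tests pass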

Oh, and git has no concept of a mainline. So it can’t really tell which revisions were stable, well-tested parts of the trunk, and which were sloppy dev branch versions which are likely to have problems. This breaks the bisection tool, and makes it harder to read the history. In part because of this, it has become the cultural norm in git circles to make sure no one ever commits any broken revisions… even in dev branches. People are expected to do their development and then rewrite history afterward to make sure each individual step works correctly. It creates extra work which shouldn’t be necessary.

Anyway, there’s still the problem of git wanting to keep all the branches in the same directory in the filesystem. This is completely incompatible with my workflow. So I tried making copies for each branch with “git clone”, then doing work in the clones as normal… but when it comes time to merge, I discover, oops, those clones don’t count as different branches. Different copies of a branch are treated as still being the same branch. But that’s not too difficult to work around. Instead of just doing a clone, do a clone followed by a “checkout -b” with the same name. Work in clone X, in branch X. Then when it’s time to merge, don’t go back into the original copy… merge clone X’s branch X into clone X’s master. And then the original copy can fetch updates from the clone and update its head pointer. Kind of awkward.
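
Roughly, the dance looks like this (names made up):

$ git clone project project-x        # a separate copy to work in
$ cd project-x
$ git checkout -b feature-x          # give the copy its own branch name
(hack, commit, test)
$ git checkout master                # still inside the clone...
$ git merge --no-ff feature-x        # ...merge the branch into the clone's master
$ cd ../project
$ git pull ../project-x master       # then the original copy pulls the result back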

It’s a bit wasteful having the entire repository copied each time, but that’s okay. Normally I avoid this in bzr by doing “bzr init-repo” in the parent directory, so each branch effectively only has a new working tree without having to duplicate the history data. One parent dir with the metadata, many subdirs where each one is a branch. Pretty simple, straightforward, convenient, and reasonably disk-efficient.
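
Something like this, just to sketch the layout:

$ bzr init-repo project              # the parent dir holds the shared history data
$ cd project
$ bzr init trunk                     # (or branch an existing project into ./trunk)
$ bzr branch trunk feature-x         # each branch gets its own subdir and working tree, but shares the history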

Git finally added something similar, using the “worktree” command. I’ve only just discovered this though, and haven’t had a chance to see how well it works in practice.
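
From a quick look at the docs, basic usage appears to be something like this (untested on my end):

$ cd project                                   # the primary checkout
$ git worktree add ../feature-x -b feature-x   # new working tree plus new branch, in a sibling dir
$ git worktree list                            # the primary keeps track of all the linked trees
$ git worktree remove ../feature-x             # clean up when the branch is done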

The “worktree” feature wasn’t added until ten years after Git was first released. It should have been the default behavior, yet wasn’t available for an entire decade. And from what I’ve seen so far, it’s designed as sort of an afterthought, so it’s still a bit awkward to use.

For example, it appears to not use a shared parent directory… instead, one sibling is the primary, and other siblings are kinda just linked back to the primary. Technically, they don’t even have to be siblings; the other working trees can be anywhere on the filesystem. If I understand correctly, the primary needs to know about all the secondaries, and the secondaries each need a link back to the primary. It also appears that one cannot create a branch of a branch this way; each one must be branched off the primary.

Regardless, it seems a lot less awkward than working in a bunch of completely independent clones. It’s just not as coherent or as well integrated as the default branching behavior in bzr. So I’m quite disappointed to see bzr being left to die, because I find it to be a better-designed DVCS tool.

Linus does good work with the kernel… but he really, really should have consulted some user interface designers and VCS / SCM experts during the early phases of creating git.

Oh, bless you TK. Again, your posts have made me feel a lot better as I’ve found Git highly non-intuitive. Though I don’t have anything like your workflow, I immediately recognize many of the issues you described. I’ve always assumed I was just too dumb to use the tool, even after reading Pro Git (which is likely out-of-date anyway).

The branch history and merge thing especially rings true for me, as I constantly find myself not using Git for certain things and instead working around it :person_facepalming: , which defeats the purpose of a VCS. I will often make a new project just to avoid committing things to the “precious” Git history of the mainline.

I’ll have to learn more about “worktree”.

Linus does things his own way, for better or worse.

I won’t go on about this further, but your long post made my day :slight_smile: .

Is there a particular reason people have traditionally gone with AVR MCUs for the flashlight driver boards? I have more experience with the TI MSP430 family of chips from my work with IoT in college.
I was thinking it might be fun to try to build a driver board/firmware using that.

Probably because the old Nanjg driver used the ATtiny13a, and that’s where most of the open-source flashlight firmware development originated.

The PIC chips are also popular, but not well-supported yet in free flashlight software.

For the most part, it doesn’t seem to matter much which brand of chip is used, since most little MCUs have similar features and the cost difference isn’t large. So BLF-related projects have been using what BLF is familiar with and has code for.

I suppose it’s also that AVRs can be flashed with $2 USBasp clones or the ubiquitous Arduino.
I don’t know if that has changed, but for a long time you absolutely needed a PICkit for flashing PICs (or had to build your own parallel-port dongle), and IIRC the available software was meh, nothing like avrdude.

The LD-x4 looks like it has a PIC on it. Maybe in an effort to discourage us from hacking it :wink:

Many of us who develop firmware are not programmers. We started by copy-paste “programming”, changing a little here and there to suit our needs. For me it started with Star v1.1 on the ATtiny13a. Then, when we wanted to start adding stuff we knew little about, a few search words here and there helped us out. To get started with flashing, we followed excellent guides here on BLF because we didn’t have anyone to help set us up. I think quite a few of us started with this stuff only because of those guides; they are what ignited the interest, at least they were for me.

Then, when you’ve done enough programming to make your own firmware, you are kind of committed to AVR, not because of the actual programming but because you have a setup for programming and flashing that works. I’m on the ATtiny1634 now, having passed through the 841, 84, 85 and 13a, still using the same flash kit and software that I started with about 5 years ago. The 1634 is more advanced than the 13a, but programming it is basically the same: a few more registers with a few more options, that’s it. I am using the 3217 for a specific project, which forced me to get new hardware for flashing, but I’m still using the same development software, so it’s not a big change at all. Once again BLF provided me with everything I needed to know to get me onto the 3217; I probably wouldn’t have looked at it if it wasn’t for yet another excellent guide.

When I started with this stuff, I had no idea what ADC, USART, SPI, WDT, TWI, USI and all that stuff meant. I was unable to make an educated MCU choice, and in terms of flashlight firmware, most of that stuff isn’t used anyway. I think anyone who would argue about which architecture is best for flashlight firmware is biased; essentially, it doesn’t make a difference. In terms of actual programming and development, what we are doing is very simple stuff.

I love MSP430, so if you do make something, please share it with us! AVR is ubiquitous and cheap, two traits that are hard to beat :wink: .

Really…AVR is not ubiquitous.
It is in the West, but not in the East.
Several times we’ve heard complaints from manufacturers that our drivers are hard to make because of difficulties with parts sourcing.

For this reason…AVR is not a perfect choice. But I think the community has too few developers to make supporting multiple architectures worthwhile. And I don’t think there’s a single architecture that would be enough better than AVR to be worth migrating to.
There are quite a few interesting non-AVR MCUs out there. They could make our drivers smaller or cheaper or enable some fancy stuff. I’d love to see them supported. But I think that, as of now, the effort is just not worth it. Especially since porting is not everything: you need to maintain a port over the years. Maintaining is less time-intensive, but over a long period it adds up. And it’s far less rewarding than the porting itself…

Thanks for the response. It should be noted that ATtiny is not a “ubiquitous” choice for the Chinese companies who design and manufacture our flashlights. Perhaps they would consider AVR and MSP430 to be equally foreign and strange to use in their drivers.

TI has a large presence in India, so perhaps MSP430 would be more common than AVR there, but I don’t think that’s true of China; they are certainly much more familiar with Chinese microcontrollers.

I mainly meant that AVR is far more common and supported than MSP430 within the computing and electronics communities that most BLF members are familiar with. The Western, English-language Internet is all I’m really able to know much about. From that perspective, the AVR product line is quite “ubiquitous”, even if many people only know about Arduino.

In the West, the word “Arduino” has typically been synonymous with a handful of ATmega products. That is changing as Arduino is ported to other architectures, but it was true for at least a decade. Chinese companies have installed millions of ATmega168/328s onto generic Arduino modules, but those products were intended for export to the West. Chinese manufacturers didn’t use AVR chips in their own products; they just preinstalled bootloaders onto those boards for us to use.

To be fair, there also exists an MSP430 port of the Arduino IDE called “Energia”, but I’ll bet few people have even heard of it.

TK.
Strange behavior of Candle mode in Anduril.
After flashing Anduril with some of the modes removed to my FW3A, candle mode doesn’t run very “smoothly”.
Every now and then, the brightness doesn’t change smoothly; it just jumps up or down, sometimes a few steps up, sometimes one step down. I didn’t have this problem on the stock firmware. I also checked on a D4 driver and on one that I built myself, and it’s the same on them. I didn’t change the fuse bits; I just removed “blink at ramp middle” and the “bike” and “tactical” strobes.
I recorded a video. It shows up at about 20s, 27s, 45s, 1m22s, and 1m46s.

TIP: How to set up an ATtiny dev environment on Fedora, RHEL, CentOS and derived:

some package names are different ...

$ sudo dnf install flex byacc bison gcc libusb libusb-devel glibc-devel
$ sudo dnf install avr-gcc avr-libc avr-binutils
$ sudo dnf install avrdude

:BEER:

- SAM -

Edit: other 'related' I've installed are: $ sudo dnf install avr-gdb gcc-c++ git patch wget texinfo zip unzip make bzr
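
Quick sanity check after installing (assuming a USBasp programmer and an ATtiny85 target; adjust for your own hardware):

$ avr-gcc --version            # confirm the toolchain is on the PATH
$ avrdude -c usbasp -p t85     # try to talk to the chip; it should report the device signature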

Thanks. I added that to the repository’s README file.

I’m curious what dnf is though…

… *looks it up* …

Oh. As Wikipedia says, “DNF or Dandified YUM is the next-generation version of the Yellowdog Updater, Modified (yum)”. I hadn’t heard about yup v3. The last one I heard about was yup v2.

Anyway, uh, funny story… or maybe an apology. I must apologize for Red Hat’s package manager. It was kind of an accident.

I was trying to get Yellow Dog to switch to Debian as its base system instead of Red Hat (for a bunch of reasons)… but the owner of the company really didn’t want to, and didn’t seem to understand the differences except that Debian had a package manager while Red Hat did not.

So the next time I visited, I found that my friend there had started writing a clone of apt (Debian’s package manager), called yup (Yellowdog UPdater). And the company needed help to get its next release out, since it was already months overdue and nowhere near ready. So I ended up building an OS installer for them. My friend who started yup got fired though (another ridiculous story there), so I inherited it. I ended up having to finish it, at least enough for the first release. I even made a GUI/TUI front end for it. But I was kind of ashamed that it existed at all, and refused to put my name in its credits file.

Eventually, though, we got the distro and its brand-new package manager to a releasable state, the entire dev team and their manager got fired (another ridiculous story), and the OS was released. Without a dev team, not much happened with it afterward.

It should have died there. The next major release didn’t include any of that stuff; it just went back to being a port of Red Hat without any significant extras.

But someone revived it. They liked yup so much that they dug it up from its grave and took over maintenance, calling it “yum” or “Yellowdog Updater, Modified”. And it became part of Fedora. After most of a decade, it finally even had some of its larger bugs fixed.

Looks like it’s now having its third life in the form of “dnf” or “Dandified YUM”.

It really should just be using apt though. There was even a promising-looking rpm-compatible version of apt in the works for a while… but it seems to have died off because of yum. :frowning:

So, um, … sorry about that.

Interesting ... Thanks TK!

Just noticed I forgot to add "libusb"

sudo dnf install flex byacc bison gcc libusb libusb-devel glibc-devel

- SAM -

Ah. I know exactly what that is.

The candle mode is implemented as a 3-oscillator synthesizer, where the three oscillators are modulated by three other oscillators. The output is the sum of all three plus the user-configured base brightness.

The three oscillators run at three different frequency ranges, so they make relatively complex patterns. But some parts, like the amplitude, are totally random. So sometimes the oscillators have their amplitude set to zero, which makes the flickering stop.

What you showed there is how it looks when the slowest wave is running but the two faster waves are stalled at zero. Normally, with one or both of the faster waves going, it gives the appearance of having really smooth curves… but the resolution is actually pretty low, and the individual steps can be seen when it changes very slowly.

Additionally, it uses the ROM itself as one of the sources of erratic data, so the behavior changes slightly with each different version of Anduril. Change anything at all in the code, and it changes the ROM, so it slightly changes the flavor of all modes which use random values. It also changes based on temperature and battery voltage, though it’s mostly just using the noise from those readings, not the signal. Regardless, it behaves a little differently as the ambient temperature changes or as the battery drains.

But mostly I think you’re seeing the “bass frequency” wave by itself, without the higher frequencies mixed in.

For a more detailed look, I graphed the actual output over time. It shows the bass wave more clearly than it would normally be perceived by eye.

I’ve been tempted to do something fancy to smooth out the brightness curves more, using the “gradual adjustment” code it uses for thermal regulation… but actually doing it has proved more complicated than I hoped. It seems difficult to do without significantly increasing the code size. If completed though, it would allow the animation to hit PWM levels in-between ramp steps, and higher resolution would look smoother.