Flashlight Firmware Repository

I don’t suppose there is a cheat sheet for which firmware will work on which drivers?

Ish.

The INDEX file has something vaguely like this. But it’s not per driver; it’s per hardware feature and per software feature.

Thanks for this. Would seem I need to learn more about hardware design.

If I were looking for a solid but reasonably priced light that I could use to learn about firmware programming (by tweaking your code), and that was fairly easy to recover in the event I did something wrong, what might you recommend?

The BLF Q8 or Sofirn Q8 are probably the best choices. Nice large board, you can access the MCU without de-soldering wires (usually), it has lighted switch LEDs (more things to tweak). The thermal config is probably more forgiving too.

Yep, for me that starts to transfer some data, but then reports a 502: Bad Gateway. At the moment, I am relying on a tethered phone for my Internet connection, so it’s possible that it actually doesn’t like my gateway for some reason. I have made failed lp: attempts from a standard WiFi connection, however, so this seems an unlikely culprit.

And the bzr Windows installer reports a 2012 release date (v2.6b1). Unless the dates on the Windows release page are typos, that’s a long time.

I’ve heard other developers complain about some aspects of Git, so you’re not alone.

I don’t think you should necessarily move, since you are the key maintainer. If BZR works for you, it seems counterproductive to ask you to use Git. A move should only be made if many contributors agree. Making such a move is inevitably an investment of time and energy that might be better spent elsewhere. On the other hand, moving to Git may be inevitable, anyway.

There is really only one thing that folks like myself need: a way to get the repo or parts of it without downloading individual files one at a time. GitHub, for example, typically offers a button on each project page which lets visitors download the current repo as a ZIP archive without any need to install Git. I assume that this feature is automated and requires only a one-time setup on GitHub, but am not certain.

Launchpad offers no such download for the flashlight-firmware repo. If such a feature existed and did not require upkeep from the maintainer, it would satisfy the needs of many visitors. UPDATE: It does; see next post.

Are you looking for this?
Browse the code -> view revision -> download tarball

Ugh, this would have saved me so much time; I downloaded all the stuff manually instead of doing the whole BZR-in-a-Unix-VM dance for a one-off edit. Will bookmark these.

Yes! 5 months ago I searched and searched for such a link. I suspected it would exist, but it was not obvious to me and I apparently didn’t notice it on the revisions pages, which I did open.

Even if I had seen it, I might have assumed it was a patch against a prior revision :person_facepalming: , as that page contains the list of changes. In my mind, the tarball would have contained only those changes. While that seemed non-intuitive to me, it makes perfect sense once you understand that every revision gets a full tarball.

I guess only contributors need to figure out the launchpad system :wink: .

Thanks so much!

Yeah, it probably needs to happen eventually.

It’s just … designed to do exactly the things I don’t want by default. So it’ll be kind of an uncomfortable change. I’ve been using git lately for other projects, particularly ones with simple development models… but for the flashlight repository I often have a dozen branches all moving in parallel, each in its own directory. Git is weirdly not-good at that.

Thank goodness it’s not just me.

The many-branches thing, or the difficulty with making Git handle that?

Sorry, I meant the awkwardness of branch management with Git. It made me feel better to hear a smart person make the same complaint :slight_smile: . This difficulty has trained me to minimize branching, which is a good or bad VCS trait depending on your point of view.

Sorry to ramble in your firmware thread; I’ll shut up for now.

Warning: Long post, lots of complaining, totally safe to skip.

It seems like git has all the right tools, or at least most of the right tools, but its interface design (and resulting cultural norms) could use some work.

Normally the way I work is… branch off the trunk so I have a dev area to work in. Hack code there for as long as it takes. Sometimes this is an hour, sometimes it’s a few months. Sometimes I merge upstream changes into my branch along the way, especially if it’s a long-lived branch. Then when it’s all done and fully tested, merge it back into trunk. Typically I then move the old working tree into an “old” or “merged” directory, because it often has extra files in it which aren’t and shouldn’t be committed into the repository itself. For example, notes from clients, todo lists for that specific branch, intermediate calculations and scripts, measurements, IDE clutter, etc. I don’t want to delete those files, but I don’t want them in the actual repository either.

And there are often quite a few of these branches being developed simultaneously. I tend to have some shells, editors, and maybe other things open for each one, and I usually leave that stuff open until the branch is merged and ready to be archived.

Any time I need to compare branches, it’s trivial. Standard filesystem tools can be used… whatever tools I like. And the actual process of creating branches is simple too; simply copying a branch creates a new one. Every copy is its own branch, and its directory name is the branch name.

The default behavior in git is all wrong for this type of workflow. And more generally, the interface is a bit unintuitive.

For starters, that first step (creating a branch) isn’t done with the “branch” command. It’s “git checkout -b”. That’s simple enough though. The branch command isn’t for switching branches, it’s mostly used for listing branches and deleting old ones. The checkout command is used for creating branches and switching between them.
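In command form, it looks something like this. (A runnable sketch in a throwaway repo; all the names here are invented for illustration.)

```shell
# Set up a scratch repo so the commands below actually run:
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email you@example.com
git config user.name "you"
git commit -q --allow-empty -m "initial commit"

# Creating a branch is done with checkout, not branch:
git checkout -b my-feature   # create AND switch in one step

# "git branch" is mostly for listing and deleting:
git branch                   # list local branches
git checkout -               # hop back to the previous branch
git branch -d my-feature     # delete it once it's merged
```

(Newer versions of git also grew `git switch` as a less-overloaded way to do the same thing, but `checkout` remains the classic interface.)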

When it comes time to merge though, git’s default behavior is to not merge at all. Instead, it pretends the branch never happened, and rewrites history to make it look like all the commits happened on trunk (er, master). … and even though it’s the default, it’s a behavior I literally never want. If I want to pretend history was linear, I’ll use the rebase command. (As an aside: after merging, if I delete the old branch to get it out of the list of active branches, there is no record that the branch ever happened, even when there was an actual revision for the merge instead of a fast-forward.)

So I set a global config option to make “merge” always use the “--no-ff” option. This tells it to do something sane by default instead of fast-forwarding to make it look like history was linear.
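Concretely, the config key is `merge.ff` (this is from git’s own documentation; setting it globally is my preference, not a requirement):

```shell
# Never fast-forward by default; always record a real merge commit:
git config --global merge.ff false

# Equivalent one-off form, without changing any config:
#   git merge --no-ff some-branch
```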

But then “git pull” breaks. Oops. Because “pull” is just an alias for “fetch” followed by “merge”, and “merge” has been told not to fast-forward. There is no first-class concept of updating the current branch to match its upstream counterpart; it’s implemented as two separate steps which don’t necessarily have quite the same meaning.

So git eventually implemented a workaround for that. And I put it in my global git config, to make it do something sane without extra options. I set pull to use “--ff” and “--ff-only”. So now it works again.
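For anyone who wants to replicate that, the relevant key is `pull.ff` (a sketch of my setup, from memory):

```shell
# Let pull fast-forward (and ONLY fast-forward), even though plain
# merge has been configured above with merge.ff = false:
git config --global pull.ff only
```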

I’ve tried to override some other defaults too, like setting “--no-commit” by default during merge, because I want to make sure the tests pass before committing any merges. Merge, test, commit. But I haven’t found a way to make it do that yet. It tries really hard to enforce “merge, commit, test, fix, commit” instead of “merge, test, fix, commit”… and this tends to put broken revisions on the mainline, which is a big no-no.
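The per-invocation flags do exist; I just can’t make them the default. The flow I want looks like this (runnable sketch with a made-up `dev` branch and trivial setup, just so it works end to end):

```shell
# Scratch repo with a trivial dev branch, purely for demonstration:
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email you@example.com
git config user.name "you"
echo base > file.txt
git add file.txt && git commit -qm "initial commit"
git checkout -qb dev
echo change > file.txt
git commit -qam "work on dev"
git checkout -q -

# Merge but stop before committing, so the result can be tested first:
git merge --no-ff --no-commit dev

# ... run the test suite on the merged-but-uncommitted tree here ...

# Only record the merge once the tests pass:
git commit -m "merge dev"
```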

Oh, and git has no concept of a mainline. So it can’t really tell which revisions were stable, well-tested parts of the trunk, and which were sloppy dev branch versions which are likely to have problems. This breaks the bisection tool, and makes it harder to read the history. In part because of this, it has become the cultural norm in git circles to make sure no one ever commits any broken revisions… even in dev branches. People are expected to do their development and then rewrite history afterward to make sure each individual step works correctly. It creates extra work which shouldn’t be necessary.

Anyway, there’s still the problem of git wanting to keep all the branches in the same directory in the filesystem. This is completely incompatible with my workflow. So I tried making copies for each branch with “git clone”. And then do work in the clones, doing things as normal… but then when it comes time to merge, I discover, oops, those clones don’t count as different branches. Different copies of a branch are treated as being still the same branch. But that’s not too difficult to work around. Instead of just doing a clone, do a clone followed by a “checkout -b” with the same name. Work in clone X, in branch X. Then when it’s time to merge, don’t go back into the original copy… merge clone X branch X into clone X branch master. And then the original copy can fetch updates from the clone and update its head pointer. Kind of awkward.
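The whole dance I’m describing looks roughly like this (a runnable sketch with scratch repos; every name is invented, and `-b master` pins the branch name, which needs git >= 2.28):

```shell
cd "$(mktemp -d)"
git init -q -b master main-repo
git -C main-repo config user.email you@example.com
git -C main-repo config user.name "you"
git -C main-repo commit -q --allow-empty -m "initial commit"

# One clone per line of development:
git clone -q main-repo branch-x
cd branch-x
git config user.email you@example.com
git config user.name "you"

# Same-named branch inside the clone, so the merge counts as a merge:
git checkout -qb branch-x
git commit -q --allow-empty -m "work"   # ... hack, commit, repeat ...

# Merge inside the clone, not in the original:
git checkout -q master
git merge -q --no-ff branch-x -m "merge branch-x"

# Then the original copy pulls the merged result from the clone:
cd ../main-repo
git pull -q ../branch-x master
```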

It’s a bit wasteful having the entire repository copied each time, but that’s okay. Normally I avoid this in bzr by doing “bzr init-repo” in the parent directory, so each branch effectively only has a new working tree without having to duplicate the history data. One parent dir with the metadata, many subdirs where each one is a branch. Pretty simple, straightforward, convenient, and reasonably disk-efficient.

Git finally added something similar, using the “worktree” command. I’ve only just discovered this though, and haven’t had a chance to see how well it works in practice.

The “worktree” feature wasn’t added until ten years after Git was first released. It arguably should have been the default behavior, yet wasn’t available for an entire decade. And from what I’ve seen so far, it’s designed as sort of an afterthought, so it’s still a bit awkward to use.

For example, it appears to not use a shared parent directory… instead, one sibling is the primary, and other siblings are kinda just linked back to the primary. Technically, they don’t even have to be siblings; the other working trees can be anywhere on the filesystem. If I understand correctly, the primary needs to know about all the secondaries, and the secondaries each need a link back to the primary. It also appears that one cannot create a branch of a branch this way; each one must be branched off the primary.
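For the curious, the basic shape of it is below, based on the docs (directory and branch names invented; as I said, I haven’t used this in anger yet):

```shell
# Scratch demo of worktrees:
cd "$(mktemp -d)"
git init -q primary && cd primary
git config user.email you@example.com
git config user.name "you"
git commit -q --allow-empty -m "initial commit"

# Create a sibling working tree on a brand-new branch:
git worktree add ../feature-a -b feature-a

# ../feature-a now behaves like a normal checkout of that branch,
# while the primary keeps track of all the secondary trees:
git worktree list

# When the branch is merged and finished:
git worktree remove ../feature-a
```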

Regardless, it seems a lot less awkward than working in a bunch of completely independent clones. It’s just not as coherent or as well integrated as the default branching behavior in bzr. So I’m quite disappointed to see bzr being left to die, because I find it to be a better-designed DVCS tool.

Linus does good work with the kernel… but he really, really should have consulted some user interface designers and VCS / SCM experts during the early phases of creating git.

Oh, bless you TK. Again, your posts have made me feel a lot better as I’ve found Git highly non-intuitive. Though I don’t have anything like your workflow, I immediately recognize many of the issues you described. I’ve always assumed I was just too dumb to use the tool, even after reading Pro Git (which is likely out-of-date anyway).

The branch history and merge thing especially rings true for me, as I constantly find myself not using Git for certain things and instead working around it :person_facepalming: , which defeats the purpose of a VCS. I will often start a new project just to avoid committing things to the “precious” Git history of the mainline.

I’ll have to learn more about “worktree”.

Linus does things his own way, for better or worse.

I won’t go on about this further, but your long post made my day :slight_smile: .

Is there a particular reason people have traditionally gone with AVR MCUs for the flashlight driver boards? I have more experience with the TI MSP430 family of chips from my work with IoT in college.
I was thinking it might be fun to try to build a driver board and firmware using that.

Probably because the old nanjg driver used attiny13a, and that’s where most of the open-source flashlight firmware development originated.

The PIC chips are also popular, but not well-supported yet in free flashlight software.

For the most part, it doesn’t seem to matter much which brand of chip is used, since most little MCUs have similar features and the cost difference isn’t large. So BLF-related projects have been using what BLF is familiar with and has code for.

I suppose it’s also that AVRs can be flashed with $2 USBasp clones or the ubiquitous Arduino.
I don’t know if that has changed, but for a long time you absolutely needed a PICkit for flashing PICs (or had to build your own parallel-port dongle, and IIRC the available software was meh, nothing like avrdude).

The LD-x4 looks like it has a PIC on it. Maybe in an effort to discourage us from hacking it :wink:

Many of us who develop firmware are not programmers. We started with copy-paste “programming”, changing a little here and there to suit our needs. For me it started with Star v1.1 on the ATtiny13a. Then, when we wanted to start adding stuff we knew little about, a few search terms here and there helped us out. To get started with flashing, we followed the excellent guides here on BLF because we didn’t have anyone to help set us up. I think quite a few of us got into this stuff only because of those guides; they are what ignited the interest, at least they were for me.

Then, once you’ve done enough programming to make your own firmware, you are kind of committed to AVR, not because of the actual programming but because you have a setup for programming and flashing that works. I’m on the ATtiny1634, having passed through the 841, 84, 85 and 13a, still using the same flash kit and software that I started with about 5 years ago. The 1634 is more advanced than the 13a, but programming it is basically the same: a few more registers with a few more options, that’s it. I am using the 3217 for a specific project, which forced me to get new hardware for flashing, but I’m still using the same development software, so not a big change at all. Once again BLF provided me with everything I needed to know to get me on the 3217; I probably wouldn’t have looked at it if it wasn’t for yet another excellent guide.

When I started with this stuff, I had no idea what ADC, USART, SPI, WDT, TWI, USI and all that stuff meant. I was unable to make an educated MCU choice, and in terms of flashlight firmware, most of that stuff isn’t used anyway. I think that anyone who would argue about which architecture is best for flashlight firmware is biased; essentially it doesn’t make a difference. In terms of actual programming and development, what we are doing is very simple stuff.

I love MSP430, so if you do make something, please share it with us! AVR is ubiquitous and cheap, two traits that are hard to beat :wink: .