FW3A, a TLF/BLF EDC flashlight - SST-20 available, coupon codes public

In full turbo, and when approaching it, the FET PWM ratio increases and the 7135s contribute less and less of the current flow. Less heat is therefore generated near the driver/MCU temperature sensor, hence the increased lag.

A quick and dirty way to fix this might be to mount a power resistor as close as possible to the MCU, connected to the LED drive output. This way the parasitic loss in this heater resistor would provide the quick response needed for PID type strategies, compensating for the lower contribution from the 7135s.
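The heater-resistor idea can be ballparked with a quick sketch. All the numbers here (resistor value, LED-node voltage, duty cycles) are invented for illustration, not a real design:

```python
# Rough parasitic power in a heater resistor tied to the LED drive node.
# Treats the FET PWM as applying the node voltage for `duty` of the time.
# (v_led, r_ohms, and duty values below are all guesses, not measurements.)

def heater_power_w(v_led, r_ohms, duty):
    return duty * v_led**2 / r_ohms

print(heater_power_w(3.0, 100.0, 1.0))   # 0.09 W at full turbo
print(heater_power_w(3.0, 100.0, 0.25))  # 0.0225 W at 25% FET duty
```

A fraction of a watt dumped right next to the MCU would be plenty to speed up the sensor's response, which is the point of the idea.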

If the driver was arranged so the resistor was only powered by the FET, not the 7135s, there would be no parasitic loss in 7135 modes, but that might require a second small FET just to drive it.

Edit: or use a gateable constant-current source, shorted to ground, as the heater, rather than a resistor + FET.

Otherwise the classic method is to use feed-forward in addition to PID, but this requires a pretty good model of the overall behaviour and is not for the faint-hearted.

Since we are not trying to thermally control it, simply to turn it down just before it overheats, I reckon some straightforward mapping of the behaviour could work.

Mapping could be attempted by taking a series of step response measurements using a thermocouple on the head and timing the duration from start temperature to critical temperature. Maybe map 8 or 16 different power levels, from full 7135 to full FET. These could then be used, factored by the head temperature measured at the MCU, to determine a step-down time for each power setting, the step-down being either smoothed out or left as a visible step to give an indication of what’s happening.
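That mapping could look something like this in code. The table values, critical temperature, and scaling rule are all invented placeholders; the real numbers would come from the thermocouple step-response tests described above:

```python
# Sketch of the step-down-time mapping idea (all numbers invented for
# illustration -- real values would come from bench measurements).

# Measured time (seconds) from a 25 C start to critical temperature, per
# power level (index 0 = full 7135 ... 7 = full FET), from step tests.
STEP_TIMES = [600, 240, 120, 60, 30, 15, 8, 4]

CRITICAL_C = 55   # assumed critical head temperature
START_C = 25      # temperature the table was measured from

def stepdown_delay(power_level, head_temp_c):
    """Scale the measured time by how much thermal headroom remains."""
    headroom = max(CRITICAL_C - head_temp_c, 0)
    full_range = CRITICAL_C - START_C
    return STEP_TIMES[power_level] * headroom / full_range

# Cold head: full measured time. Half the headroom gone: half the time.
print(stepdown_delay(7, 25))  # 4.0
print(stepdown_delay(7, 40))  # 2.0
```

The linear scaling is the crudest possible choice; a real implementation might want a curve fitted to the measured data instead.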

Once back down to full 7135 power, the universal algorithm takes over again.

PS: some more detail on how to use the LED Vf as its own temperature sensor:


PPS: You already have a constant-current source on the driver (1x 7135), so if you could arrange a second set of wires to the LED (4-wire probe) feeding an ADC input to the MCU, you could do it, with just the tiniest flicker when taking the measurement. Guessing at a 2 mV/°C slope, you’d be looking to resolve say 200 mV over the range 0-100 °C, on the say 3 V Vf, which sounds do-able.
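The Vf-to-temperature conversion is just a straight line under those guesses. Both the calibration point and the slope below are assumptions for illustration (white LED Vf typically falls a couple of mV per degree, but the actual figure varies by emitter):

```python
# Rough sketch of using LED Vf as a temperature sensor, assuming a
# -2 mV/C slope and a known Vf at a calibration point (both guesses).

VF_AT_25C = 3.000        # volts at 350 mA, calibrated once per light (assumed)
SLOPE_V_PER_C = -0.002   # Vf tempco: Vf drops as the die heats up

def led_temp_c(vf_measured):
    return 25 + (vf_measured - VF_AT_25C) / SLOPE_V_PER_C

print(led_temp_c(3.000))            # 25.0
print(round(led_temp_c(2.900), 1))  # 75.0
```

A 100 mV drop in Vf thus maps to a 50 °C rise, which is why ~200 mV of ADC resolution over the 0-100 °C range sounds do-able.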

How does Zebralight implement their thermal PID control? Because it’s very responsive, doesn’t overshoot, and will quickly react both up and down to keep the light at the set thermal level. Granted, the brightest Zebralight is 2300 lumens, so not quite as bright as this light, but it’s still pretty powerful and needs to ramp down in a minute or so. It gets hot, but not too hot to hold.

My reference to “classroom theory” is because in this torch the temperature sensor is not closely coupled to the parts that are trying to be controlled. So it is a lot more difficult to do, and I am impressed by how well it works.

I regard what TK is doing as a pragmatic approach, using empirical methods, rather than a theoretical analysis.

Control engineering is quite a deep subject, I only studied it just enough to realise how little I understood.

See e.g.

to get an idea of some of the complexity.

No idea about Zebralights, but clearly they are doing something right. ISTR from posts here that DrJones has developed some impressive temperature control as well, but can’t find the details.

Edit: found DrJones’ work: H17F - programmable driver with full thermal regulation

PS: here is some less highbrow explanation, from the great Robert A. Pease, Applications Engineer for Nat. Semi back in the day, I’ve probably read every application note he ever wrote, required reading for any analogue engineer.


Oh man, that brings back memories. I used to subscribe to EDN just for Pease Porridge.


It was called EDN back then (Electronic Design News) and no I didn’t have to subscribe, in fact dealing with the semiconductor reps. and their latest stuff, and future plans, could have taken up all my time and got in the way of the job. But things were evolving so fast.

Evaluation samples were plied upon us, and meetings often took place down the pub (or posh restaurant) at lunchtime or after work, happy days.

And we all knew each other.

This would have to be a high precision differential measurement synchronized to the PWM cycle in order to get useful data.

You don’t need a diff-amp if you can sample both signals fast enough. But I’d suggest just clamping one to +V and sampling the switched signal, which gets rid of the offset once you subtract the cell voltage. It might be good enough. Of course it would be sampled in between regular operation; that’s why I suggested it could be done in a blink.

Just a mad idea though.

Edit: If this driver does without the voltage divider to measure Vbatt, as I suspect, then it could be a bit approximate, but measured Vbatt vs. LED Vf should still track if they come through the same pin to the same ADC.

One way to make that happen: put a large pullup from Vbatt onto the ADC pin. Measure Vbatt with everything else turned off (7135 and FET). Then pulse the LED with one 7135 for a few milliseconds, if necessary turning up the gain; measure, subtract, calculate, job done. All through one pin (possibly multiplexed with something else).

For best precision, take two slightly different measurements: not Vbatt open-circuit, but Vbatt whilst driving the pulse (two pulses required)…
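The one-pin measurement sequence might be sketched like this. The `adc_read` and `pulse_7135` hooks are hypothetical stand-ins for real firmware calls, and the voltages in the demo are made up:

```python
# Sketch of the one-pin idea: the ADC pin is pulled up to Vbatt and also
# wired to the LED node, so the same pin reads both voltages in turn.
# (Hardware hooks below are hypothetical, not real firmware functions.)

def measure_vf(adc_read, pulse_7135):
    # 1) everything off: the pullup drags the pin up to Vbatt
    v_batt = adc_read()
    # 2) pulse one 7135 for a few ms: the pin now sits at the LED Vf
    pulse_7135(True)
    v_f = adc_read()
    pulse_7135(False)
    # same pin, same ADC, so gain/offset errors largely cancel in the pair
    return v_batt, v_f

# Fake hardware for demonstration:
state = {"on": False}
def fake_adc():
    return 2.95 if state["on"] else 4.1
def fake_pulse(on):
    state["on"] = on

print(measure_vf(fake_adc, fake_pulse))  # (4.1, 2.95)
```

Because both readings come through the same path, subtracting them removes most of the common-mode error, which is the whole attraction of the scheme.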

I’m fairly new around here and have learned a lot from BLF. I’ve since picked up a BLF A6, an Astrolux C8 (thanks to WalkIntoTheLight’s review crushing the Convoy C8) and a Massdrop/Lumintop Brass Tool AAA.

As for PWM, I’ve heard it’s used in some cheap or budget lights as a way of cutting costs. I don’t know much about it. But I do find it annoying on an older light I have.

Would someone please clarify:

a.) why the FW3A will use PWM and

b.) if/how its use of PWM will have minimal noticeable effect?

Thank you!

But you don’t find PWM annoying on the BLF-A6?

If you don’t mind PWM on the A6, you won’t mind it on the FW3A. They both use the same method, except the FW3A does it better. What people dislike is slow PWM, and that’s not what the FW3A does.

It’s about more than cost. A full current regulation circuit also requires more space and fancier heat sinking, and introduces a variety of other complications depending on how it’s done.

Thanks for clarifying. You’re right… It’s the slow PWM I don’t like. I don’t notice it on the A6 and am pleased it will be even better on the FW3A.

Most manufacturers now seem to be doing PWM the right way: fast. I still prefer current regulation, but you generally only get that on more expensive lights. I don’t find fast PWM annoying at all, it’s more for reasons of efficiency and well-regulated output that I prefer current regulation. For budget lights, it seems that a FET driver and PWM is the way they’re all done (for the high modes).

I think the FW3A shows more promise by using 8 (or is it 10?) 7135 chips. That should give much better regulated output on higher modes. Though I find the Convoys that use 8x7135 chips still suffer from dropping output as the battery voltage drops, well before I would have thought the voltage should start having an effect. It’s not as bad as pure FET, though.

What else can you expect, when driving a typical white LED from a LiIon cell with a crude driver ?

The voltage mismatch is large: huge inefficiency when the cell is full, and it barely works as the cell discharges below e.g. 3 V (with lots of energy still left there, by the way).

The efficiency is all over the place (the 7135 and anything linear are obviously better, until they drop out); the FETs run open-loop on crude PWM at the worst efficiency, with no voltage compensation AFAIK.

There are far better ways of doing this, but they cost a tiny bit more, and take skill to design.

Boost, Buck, Buck-Boost. Choose a good one and you might double your cell life compared with a crude driver. Makes all the debate about “what is the best cell” a bit moot.
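The scale of the mismatch is easy to see with an idealized back-of-envelope comparison. The buck efficiency figure below is a typical assumed value, not a measurement of any particular driver:

```python
# Idealized efficiency comparison: linear regulation vs a switcher.
# (Numbers are illustrative only, not a model of any real driver.)

def linear_efficiency(v_cell, v_f):
    # A linear regulator (7135) burns the cell/LED difference as heat,
    # and stops regulating entirely once the cell drops below Vf.
    return v_f / v_cell if v_cell >= v_f else 0.0

BUCK_EFFICIENCY = 0.90  # typical decent switching converter, assumed

print(round(linear_efficiency(4.2, 3.0), 2))  # 0.71 on a full cell
print(round(linear_efficiency(3.0, 3.0), 2))  # 1.0 right at dropout
```

So a linear driver wastes nearly 30% of the energy on a full cell, then falls off a cliff near empty, while a decent switcher holds ~90% across the whole discharge, which is where the "double your cell life" claim comes from.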

It is long past the time when 7135/FET drivers should have any credibility, no matter how easy they are to design and cheap to manufacture.


It won’t be feasible to change the MCPCB design again before production.

The inner tube issue is already solved, I think, but we’re waiting on test results to be sure.


And probably more classroom theory than I’m using. When it oscillates, the oscillations are very regular, suggesting that it probably has a more formal design than what I’m using, or at least a much better signal-to-noise ratio on the sensor data. And probably more “I” with less “D”.

What I’m doing is a form of PID, but it’s less of a proper academic “Hogwarts/Brakebills” PID and more of a hedge witch “street” PID.
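For readers unfamiliar with the jargon, a textbook PID step looks like this. The gains and setpoint are invented, and this is emphatically not Anduril's actual code, just the classroom version being contrasted with the "street" version:

```python
# Minimal textbook PID controller, to illustrate the P/I/D terms.
# (Gains, setpoint, and loop timing are invented; not Anduril's code.)

class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured, dt):
        error = self.setpoint - measured           # P: present error
        self.integral += error * dt                # I: accumulated error
        derivative = (error - self.prev_error) / dt  # D: rate of change
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=1.0, ki=0.1, kd=0.5, setpoint=45.0)  # 45 C target, assumed
print(pid.update(40.0, dt=1.0))  # 8.0
```

The "more I, less D" remark maps directly onto `ki` and `kd` here: a noisy sensor makes the derivative term jumpy, so a cleaner signal lets you lean on it more.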

I have an H17F in a solid copper host with some serious thermal mass, and did some testing on its thermal response. This plot shows the H17F and an early version of Anduril from ~10 months ago. I found that DrJones’ method appeared to drop 1 PWM level every 0.5 seconds or so, until the FET was no longer active, then drop 1 PWM level every 2 seconds or so on the 7x7135 channel, until it was no longer overheating. It took about 8 minutes to stabilize, because the adjustment was very slow. Very smooth though, and it seemed okay in such a solid host. Bumps on its graph were where I accidentally moved the light while checking its surface temperature.

Meanwhile, Anduril stabilized in about a minute. These results are not directly comparable though, due to being in different hosts with different power levels. So at some point I should probably compare them in the same host to find out if the H17F can speed up when necessary.

On the FW3A, there was a spare pin so it actually has an “optic nerve” built in to be able to use the LED as a light sensor. But the way it’s designed probably wouldn’t work for temperature for a variety of reasons. Mostly, the reading is designed to auto-center on zero over time, so edges are visible but absolute levels are not.

The first prototype will likely soon become a dev host for optical sensor features. Those aren’t planned for release, but perhaps in a later version. Configuring a flashlight from a computer screen is a neat trick, but it’s mostly not very practical so I haven’t prioritized it.

People sometimes complain about linear drivers, but for the most part they work fine as long as the emitters match the power source. The main benefit of a buck/boost is being able to run mismatched voltages, like XHP35 on a single cell.

Drivers could certainly be optimized more, but it seems like a matter of diminishing returns. Fancier drivers provide benefits most people won’t really notice, and sometimes the extra complexity comes with significant baggage. Sometimes it’s worth the trouble, sometimes not.

On high or lower modes? The Convoy and similar drivers have an odd quirk that causes poor regulation on lower modes.

I built my first Convoy recently using an 8x7135 Qlite driver - pretty similar to the Convoy driver, but I have STAR firmware on mine. Out of curiosity, I tested the regulation with a linear power supply. Mine held regulation at ~3 Amps until the input level dropped below 3.4V when driving a 2.8-2.9V rated 219C. It dropped to 50% of initial current draw at 3.15V.

Considering Nichia’s datasheet shows the forward voltage is 3.4V at roughly that current level, this $5 driver seemed to have perfect regulation. There are caveats for voltage binning and the temperature dependence of Vf, but they don’t change the conclusion much.
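The regulation test boils down to a simple condition: a linear driver holds its set current only while the cell voltage exceeds the LED's Vf plus the regulator's dropout. The dropout figure below is an assumed round number, not a measured value for these chips:

```python
# Linear-regulation condition: current is held while
#   Vbatt >= Vf + dropout.
# (The dropout value is an assumed ballpark, not a measurement.)

DROPOUT_V = 0.12  # rough 7135 dropout at high current, assumed

def in_regulation(v_batt, v_f):
    return v_batt >= v_f + DROPOUT_V

print(in_regulation(3.6, 3.4))  # True -- still holding current
print(in_regulation(3.4, 3.4))  # False -- output starts to sag
```

With a 3.4 V Vf emitter, falling out of regulation somewhere around 3.4-3.5 V input is exactly what the test above observed.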

Surprisingly, regulation was far worse on lower modes.

My 750 mA target (25%) mode fell out of regulation at 3.9V.
My 90 mA target (3%) mode was already out of regulation at 4.3V.

This wasn’t actually a surprise to me. I was looking for it specifically because, when shopping for a driver, I noticed HKJ documented it over 4 years ago. He speculated based on oscilloscope data that the 7135 chips respond too slowly to the 16 kHz PWM to turn on fully when the duty cycle is low. Maukka measured similar results when he did a more cursory output test of an S2+.
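The arithmetic behind HKJ's speculation is worth spelling out: at 16 kHz the whole PWM period is only 62.5 µs, so a low duty cycle produces pulses comparable to the chip's turn-on time. The turn-on figure below is a guess for illustration:

```python
# Why low duty cycles hurt slow 7135s: at 16 kHz a 3% pulse is under
# 2 us, on the same order as the regulator's turn-on time.
# (The turn-on time below is an assumed figure, not from a datasheet.)

PWM_HZ = 16000
TURN_ON_US = 2.0  # assumed 7135 rise time

def pulse_width_us(duty):
    return duty * 1e6 / PWM_HZ

for duty in (0.03, 0.25, 1.0):
    width = pulse_width_us(duty)
    # "comfortable" here means the pulse dwarfs the turn-on transient
    print(duty, round(width, 2), width > 5 * TURN_ON_US)
```

A 3% pulse never gives the chip time to reach full regulation, while a 25% pulse is already comfortably longer than the transient, matching the observed behaviour of the low modes.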

However, the D4 also uses a high frequency PWM for its 7135 chip, yet does not share this issue. I don’t know if it is due to the specific brand of 7135 used, a driver design issue, or if Toykeeper, TomE or others know a firmware trick to avoid it.

Please add me to the list (2)

All of the above. The FW3A uses a few methods to avoid that problem:

  • Chip brand: “Raptor claw” 7135 chip chosen specifically for its activation speed, so it will perform well with short pulses. The low-mode problems are mostly seen on “failboat” 7135 chips.
  • Driver design: 1x7135 chip on its own channel, so it uses the 350mA Vf instead of the 3A Vf, and can thus regulate longer.
  • Firmware: Slower PWM frequency at the lowest levels, like moon, to improve stability and reduce voltage sensitivity. Also reduces total power draw significantly, so moon runs about 3X longer than it would at full speed. (the D4 doesn’t do this, but the FW3A does)
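The firmware trick in the last bullet can be sketched roughly as follows. The frequencies, level thresholds, and prescaler choice are all invented for illustration and are not the FW3A's actual clock configuration:

```python
# Sketch of the "slower PWM at the lowest levels" trick: dividing the PWM
# clock at moon-like levels widens each on-pulse so even a slow 7135 can
# turn on fully. (All numbers invented; not the FW3A's real setup.)

BASE_PWM_HZ = 16000

def pwm_settings(level, top=255):
    # at very low levels, divide the clock so the on-pulse stays wide
    prescale = 8 if level <= 2 else 1
    freq = BASE_PWM_HZ / prescale
    pulse_us = level / top * 1e6 / freq
    return freq, pulse_us

print(pwm_settings(1))   # moon: 2 kHz, ~2 us pulse instead of ~0.25 us
print(pwm_settings(64))  # mid level: full 16 kHz
```

Dropping the frequency by 8x at moon makes the shortest pulse roughly 8x wider, which is why the chip can regulate properly there and why the quiescent power draw falls as well.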

So… regulation on this thing actually works pretty well.

Aha! Thanks for the reply. I spent far too much time researching this issue before I gave up trying to find answers and bought parts for my build.

I should have just created an account here earlier to ask, instead of digging through countless old threads.

But I’m glad I mentioned it in this thread, because now I know the concern is already addressed in the FW3A.

Unfortunately, it seems there are a lot of drivers out there using the lower quality chips.