Emisar D18 introduction

Is the UI supposed to be able to find an output level that is sustainable or does it simply drop to a pre-determined output once thermal regulation kicks in?

I got my PL47 recently and it appears to be the latter. I used a temp gun to calibrate the temp properly, set the thermal reg to 50C and ran a test on turbo. My PL47 does indeed reduce output at 50C but that’s all it does and the output it changes to isn’t anywhere near sustainable. Rough numbers:
  • initial temp: 24 C
  • 1:20 (min:sec) on turbo, it reached 50 C and stepped down
  • rather than stabilizing, the temperature continued to climb significantly
  • 1:00 later, at the stepped-down output, it had reached 64 C before I shut it off

I see no way through Anduril to choose the output level that thermal regulation drops to.

A slight turn-off delay is soooo much better than a double-click flash.

A slight turn-on delay is soooo much better than trying for moonlight and getting level 11 instead.

Please can you elaborate on this?

The latter would be thermal precaution, not regulation. And there’s not much science behind that. Almost as simple as timed stepdown.

When it’s at room temperature, does the temperature check mode blink out the correct temperature?

If the limit is set to 50 C, it should do that initial ramp-down before it actually hits 50 C. It triggers that as soon as the predicted temperature gets that high, not the current temperature. If it’s waiting until it actually hits the limit, something isn’t right.

Both, ish.

There are three sections of the ramp:

  • Top / Paranoid: When the predicted near-future temperature reaches the limit, it quickly ramps down to a “sane” level. Usually this happens significantly before the body temp actually reaches the limit.
  • Middle / PID: Normal PID regulation, but it won’t go above the “sane” level or below the “safe” level.
  • Bottom / None: No regulation. Not bright enough to need it.

I generally set the “sane” level by measuring what the light can sustain at about 50 C, and then doubling it. So, if a hot-rod light can sustain 2000 lm at 50 C, the “sane” level would be about 4000 lm. During use, turbo would ramp down quickly to 4000 lm, then it would use PID to continuously search for an ideal level.

On some lights, the “paranoid” part isn’t there, so it only has a PID zone and a no-regulation zone. This is the case for anything with a reasonably low power-to-mass ratio. For example, the D4 build has a “paranoid” zone, but the D4S does not… because the D4S doesn’t heat up fast enough to need it.
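
To make the zones a bit more concrete, here’s a toy sketch of the idea in C. It is not the actual Anduril source; the names, constants, and the simplistic controller are all just illustrative:

    /* Toy sketch of the three-zone idea; NOT the actual Anduril code.
     * All names, constants, and the simplistic controller are illustrative. */
    #include <stdio.h>

    #define TEMP_LIMIT   50   /* user-configured limit, deg C            */
    #define SANE_LEVEL   80   /* quick-ramp-down target (top zone)       */
    #define SAFE_LEVEL   30   /* regulation never drops below this       */
    #define NO_REG_CEIL  30   /* below this, no regulation is needed     */

    static int clamp(int x, int lo, int hi) { return x < lo ? lo : (x > hi ? hi : x); }

    /* One regulation step: given the current level, body temp, and how fast
     * the temp is rising (deg C per check), return the new output level. */
    static int thermal_step(int level, int temp, int temp_rate) {
        int predicted = temp + 4 * temp_rate;        /* crude near-future estimate */

        if (level <= NO_REG_CEIL) return level;      /* bottom zone: leave it alone */

        if (predicted >= TEMP_LIMIT && level > SANE_LEVEL)
            return SANE_LEVEL;                       /* top zone: fast drop to "sane" */

        /* middle zone: nudge the level toward whatever holds temp at the limit */
        int error = TEMP_LIMIT - temp;               /* positive = room to spare */
        return clamp(level + error, SAFE_LEVEL, SANE_LEVEL);
    }

    int main(void) {
        /* e.g. turbo (level 150) while the body is at 45 C and climbing 2 C/step */
        printf("new level: %d\n", thermal_step(150, 45, 2));   /* -> 80 (sane) */
        return 0;
    }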

The PL47 has all three zones, so it is expected to do a quick ramp down from turbo, and then it should do normal regulation. However, Fireflies never provided a dev host suitable for thermal testing, so the settings are largely based on guesses instead of measurements. I don’t even have a production model of the PL47. And for the E07, I wasn’t even involved… it just re-uses the PL47 driver without changes, so it hasn’t been calibrated at all.

The D18 development was done more cautiously though… like, tested and measured on actual hardware, adjusted, measured again, adjusted, measured again, etc.

From the factory, the attiny85 isn’t calibrated very well. Its temperature readings vary within a range of about 25 C. So one light may regulate to 45 C, and another might regulate to 70 C, and they both think they’re doing the same thing.

This is the reason why Anduril has a temperature calibration function. It’s necessary in order to work around the hardware’s lack of factory calibration. That way, after the user calibrates it, it should be within a degree or two of the actual temperature, instead of being off by up to 25 C.
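
The idea behind the calibration function is simple: store an offset from one known reference point. A minimal sketch of that idea (not the actual Anduril code; the raw reading here is just a stand-in for the attiny85’s sensor):

    /* Minimal sketch of the idea behind user temperature calibration;
     * not the actual Anduril implementation, just the general approach. */
    #include <stdio.h>

    /* Stored once, when the user tells the light its true room temperature. */
    static int temp_offset_c = 0;

    /* Pretend raw sensor reading; real attiny85 readings can be ~25 C off. */
    static int read_raw_temp_c(void) { return 43; }  /* e.g. actually 25 C in the room */

    static void calibrate(int actual_room_temp_c) {
        temp_offset_c = actual_room_temp_c - read_raw_temp_c();
    }

    static int read_temp_c(void) {
        return read_raw_temp_c() + temp_offset_c;
    }

    int main(void) {
        printf("before calibration: %d C\n", read_temp_c());  /* 43 C, way off  */
        calibrate(25);                                         /* user enters 25 */
        printf("after calibration:  %d C\n", read_temp_c());  /* 25 C           */
        return 0;
    }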

The Meteor’s manual says it should sustain 50 C, but mine seemed significantly hotter. I’m guessing this was due to the lack of factory calibration combined with the lack of a user calibration function.

lol, considering the number of menus, options, and sheer amount of clicking I do with my single Anduril light so far, I’ll gladly take the 0.4s delay over the light switching off every time I don’t multi-click fast enough.

I’ve already noticed that the speed at which I can multi-click is impressive and pleasant. In the beginning I was modifying my hand placement so I could rapidly mash the button like a video game controller. Now that I’ve played with it long enough to understand the limit for registering a multi-click sequence, I’m very pleased indeed.

To be honest, the one major downfall I’ve experienced has nothing to do with the UI but with the switch on my PL47 itself. For such a click-happy UI they made two major mistakes. First, the button isn’t great: it feels like I have to click it dead perpendicular so the tip of my finger depresses the small center switch, or else it doesn’t give me much feedback at all (no audible click or significant depression of the switch). Second, I don’t know what FF was thinking when it came to edge refinement on the PL47. A 45 deg light with a top switch means your thumb rests half on the switch, half on the back of the head, which just so happens to be precision machined into a 90 deg corner that jabs harshly into your thumb. I don’t know how that was overlooked; there should be a 45 deg transition instead of a harsh 90 deg corner.

Other than that, thank you for the explanation, which solved a problem I didn’t know existed until today. Like I said before, I still love Anduril the most, and now that I know the delay is a trade-off for not having to furiously button-mash, Street Fighter style, in order to multi-click, I’m sold!

Thanks for the great detail.

Given the paranoid/predictive settings that should be present on my PL47, I can say there appears to be something wrong with the paranoid setting. I will now do a much more detailed test using timers and temperature readings, though I can’t measure output. In my experience thus far, it appears that my PL47 ramps down, but because of how rapidly the metal heats up, it can’t stop the momentum, and even the significantly lower regulated level keeps pushing the temperature upward.

When I run another test I’ll do it for an extended period to determine whether the light blasts through 50, more slowly reaches 65-70, and then finally begins cooling back down to stabilize near 50. I’m very doubtful, given that when my PL47 reaches 50+ and I turn it off completely, it takes about 1.5 minutes just to fall to 40 C. With so little mass to dissipate the heat, stabilizing while any heat source is still running just doesn’t seem possible.

Does the D18 utilize PWM?

I’d also like to know: what are the major disadvantages of PWM?

I’ve read a thread here explaining drivers and it touched on PWM. To me it makes sense that you wouldn’t want high-frequency power switching in place of a constant current at reduced voltage, but what makes PWM so bad? I’ve seen some prominent reviewers who dedicate a section of every review to whether a light has PWM or not and, if it does, what the details of the PWM are.

On that note, to try and test my new knowledge: PWM would be used in place of a buck driver that would simply shed off the higher voltage when running on lower output levels?

If the PWM frequency is low enough, it might become slightly visible. When it’s in the thousands of hertz, though, you’re never going to notice it.

PWM is usually a bit less efficient, but on the plus side you’re able to manage tint deviation better.

PWM v. Current Control

  • Efficiency - Both Current Control and PWM should have the same efficiency at maximum power. However at lower power settings, Current Control is more efficient. This is because LEDs get more efficient in terms of lumens per watt the less power you put into them. Current Control actually lowers the power to the LED. PWM runs the LED at full power all the time but just switches on and off rapidly. This increased efficiency at lower power can be substantial and is the main advantage of Current Control.
  • Tint shift - Tint does not change even when going to lower power or moonlight modes. This is an advantage over Current Control. With Current Control, the tint may become more orange and greenish on lower power settings. True “tint snobs” should prefer PWM over Current Control because of this.
  • Strobing from low frequency PWM - If the LED doesn’t pulse on and off fast enough, the pulsing can be visible to the eye and creates an unpleasant strobing effect. You can tell if a light has low frequency PWM by turning it to an intermediate power setting and then quickly swiping the beam over a wall. Instead of a smooth blur you’ll see multiple copies of the hotspot. You can also tell if low frequency PWM is present by turning a light on at intermediate brightness and pointing it at a blowing table fan. Many cheap budget lights use low-frequency PWM. Most of the hostility people have against PWM comes from experiencing low frequency PWM without realizing that high frequency PWM exists.
  • High frequency PWM - If the PWM frequency is high enough, it becomes impossible to see with the naked eye. There is no unpleasant strobing effect and the PWM isn’t visible even with the ceiling or fan test. Without the strobing, high frequency PWM looks just as good as Current Control. Most good quality lights, and all BLF enthusiast drivers with PWM, use high-frequency PWM.
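
To put rough numbers on the efficiency point above: suppose a hypothetical LED makes 100 lm/W at full power but 150 lm/W at roughly a tenth of its rated current (illustrative figures only, not for any real emitter). At a 10% output level, PWM and Current Control draw about the same average power, but Current Control gets more light out of it:

    /* Illustrative only: made-up efficacy numbers to show why current control
     * wins at low levels (LEDs produce more lumens per watt at lower current). */
    #include <stdio.h>

    int main(void) {
        double full_power_w = 10.0;   /* hypothetical LED at full drive       */
        double full_lumens  = 1000.0; /* -> 100 lm/W at full power (assumed)  */
        double low_efficacy = 150.0;  /* assumed lm/W at ~10% of full current */
        double duty         = 0.10;   /* 10% output level                     */

        /* PWM: LED always runs at full power, just 10% of the time. */
        double pwm_lumens = duty * full_lumens;   /* 100 lm */
        double pwm_power  = duty * full_power_w;  /* 1 W    */

        /* Current control: same 1 W average, but at the higher low-current efficacy. */
        double cc_lumens  = pwm_power * low_efficacy;   /* 150 lm */

        printf("PWM: %.0f lm @ %.1f W, CC: %.0f lm @ %.1f W\n",
               pwm_lumens, pwm_power, cc_lumens, pwm_power);
        return 0;
    }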

Interesting, thank you!

So nobody knows yet if the D18 has PWM?

If it does, it sounds like the SST-20 4000K with its greenish low-level tint wouldn’t be an issue at all then.

Ahh, of course it does, except at the max of each channel: single 7135, 7135 bank, or FET.

TK could explain it better, but I believe we use ~15.6K PWM - pretty high. I think Anduril can use lower rates at the low end. The only lights our drivers don’t use PWM on are the buck driver setups like the BLF GT, but technically we still use PWM to control output levels there; I believe there’s just no flicker… Or something like that… Sorry, no time to research it all out.

Basically any 7135 and/or FET driver with levels uses PWM - we use like 150 levels, so PWM is definitely used. This goes back to the Nanjg driver days - no buck or boost, just amp regulators and FETs.

There was a time when 1K PWM was considered super high. Back then I had lights that were something like 150-300 Hz, I believe. Our early custom Nanjg firmware for the ATtiny13A went to 2K or 4K, I believe - that was considered amaz’n.

No that’s not how a buck driver works, it does not “shed off” the higher voltage. It is a DC-DC converter that converts the cell voltage to the lower one required by the LED, typically with good efficiency, 90% or more is possible. The output to the LED is smooth DC, with a little ripple. They control the LED brightness by measuring the current through it, usually through a low-value sense resistor, and regulating it to the desired level by modulating the operation of the conversion electronics. Basically they are a mini switch-mode power supply with variable current regulated output.
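
As a rough illustration of the sense-resistor part (all values here are hypothetical, not from any particular driver): with a 10 mOhm sense resistor, 30 mV across it means 3 A through the LED, and the converter adjusts its switching until that matches the target for the selected brightness.

    /* Toy numbers showing the sense-resistor idea described above; all values
     * are hypothetical, not from any particular driver. */
    #include <stdio.h>

    int main(void) {
        double r_sense_ohm = 0.010;   /* 10 milliohm low-value sense resistor    */
        double v_sense     = 0.030;   /* measured voltage across it              */
        double i_led       = v_sense / r_sense_ohm;   /* Ohm's law: 3.0 A        */

        double i_target    = 2.0;     /* desired LED current for this brightness */

        /* The converter's control loop compares measured vs. target current and
         * adjusts its switching to close the gap, keeping the LED output steady. */
        if (i_led > i_target)
            printf("measured %.1f A > %.1f A: reduce conversion duty\n", i_led, i_target);
        else if (i_led < i_target)
            printf("measured %.1f A < %.1f A: increase conversion duty\n", i_led, i_target);
        else
            printf("on target\n");
        return 0;
    }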

Boost converters work the same way, but boost the voltage. Necessary to e.g. run a higher voltage LED from a single cell, or a chain of LEDs in series.

Boost-buck converters can operate either way, useful for e.g. a torch designed to accept a variety of cells, such as a single 1.2V NiMH, 1.5V alkaline, 1.7V primary lithium, 3-3.2V primary lithium, 4.2V secondary LiIon, etc. in one cell, two cell or greater multiples using extension tubes.

Being mini SMPS they are complex, require bulky specialised magnetic components, and require considerable skill, and good expensive test equipment to design.

BLF derived drivers such as this one are far simpler things, that just do one job, with one cell type.

7135 constant current drivers are used at the lower levels. Each supplies 350mA. These are PWMed at high (invisible) frequency to reduce the output for lower levels.

There are limitations on how fast they can be PWMed, at very short pulse widths the rise and fall times of the output become significant. Some makes, even batches, are much better than others.

This design uses two banks of 7135s. A single one for the lowest levels, down to moonlight, and a set of 14 for the higher ones (it does have 18 LEDs).

For continuous use, without overheating, that is sufficient.

For the highest levels a FET switch is used to directly connect the cell to the LED, PWMed or DC. There is no current control; the LED has to burn off any excess power as heat, within its junction. Commonly the LED is then operating beyond its peak efficiency, or beyond the manufacturer’s current and power ratings. PWMing it doesn’t alter this. Still, they work. LED lifetime may be reduced, but that’s not usually significant for most. And those that care about fading will probably already have swapped the LEDs for newer ones before then.

When the FET is in use the brightness is un-regulated, and tracks the cell voltage as it discharges, sometimes becoming lower than what is achievable by the 7135s, well before the cell is substantially depleted.

E.g. on this, the LEDs are driven at 290mA each by 7135s. Under FET operation, that will be far higher. Cell internal resistance and peak current capability now become important factors.

Hence our keen interest in the LED transfer characteristics and evaluations posted here by some experts. Cell, springs, wires, FET choice, improving current path, thermal path from the LED, DTP copper MCPCBs, tail current measurement etc.

Boost/Buck drivers can overcome some of these concerns by compensating for many of these losses, controlling LED current through the cell discharge, and, done well, being more efficient (better run time).

These are selling points for commercial torches, where tables of Lumens vs. run-time are studied by consumers.

The MF01 for example has a sophisticated boost driver, though the new version is to have a BLF architecture, which will doubtless reduce manufacturing costs, which should be good for the consumer.

This is a tried and tested architecture that works well, is friendly for firmware developers, uses a minimum of inexpensive, readily available components, is easy to understand, is practical for DIY builders, modders and repairers to assemble even with a minimum of tools, and can be easily copy-pasted by any aspiring PCB layout engineer or manufacturer to develop their own variants using a basic 2-layer PCB.

PS: some actively seek out torches with low frequency PWM, e.g. for light-painting effects.

Now there’s a thought, add a light-painting configuration to the firmware, so they can adjust it down.

Thanks Tom Tom,

I’m going to do some much needed deeper research on drivers and try to cross reference your very long and informative post.

Appreciate you taking the time to help a muggle see the light

Mattadores, it helps to understand the mysterious inductor and its magical properties. Lol

It can increase voltage if you short the output to ground very quickly, or decrease voltage as you pulse its input quickly, which is why you see them in both boost and buck drivers. It’s all really complicated and I don’t fully understand it.

FET drivers are easier to understand, as they are more or less on/off switches. With them it’s a matter of modulating the width of the “on” signal pulses from the MCU to the FET’s gate, i.e. Pulse Width Modulation (PWM).

Drivers are really complicated unless you’ve actually studied electronics. I’m just barely teaching myself thanks to this new flashlight hobby. Don’t expect to pick this stuff up quickly because it’s so dense. I’d start by trying to understand the FET driver. It’s the simplest and fairly common around here. I’ve got some Texas Avenger FET driver schematics if you’re interested.

I know Tom Tom is going to hate these videos, but check out the Louis Rossmann basic electronics videos. He mainly deals with laptops, but all the same basic principles apply. Laptops use FETs, buck and boost circuits, plus everything else you see in flashlight drivers. Hope this helps.

You are correct. He may be a decent laptop tech, able to identify and replace failed components, which is a worthy skill. But he soon exceeds his understanding and spouts tripe. His description of a buck driver, for example, was so pitiful, and wrong, that I couldn’t bear to watch it all the way through.

Not one to listen to. Many many far better resources out there to educate yourself, and not fill up your brain with, frankly, nonsense.

E.g. Google “free online EE course” and choose something like basic circuit theory to start with. Dip in and out and take it at your own pace.

It’s a FET+13+1 driver, so it uses PWM. However, with 4.9A spread between 18 emitters, that’s ~272 mA per emitter on the regulated modes, which isn’t enough to push a green-tinted SST-20 into the pink zone. So it might be greenish, depending on the tint bin used. I’m not really sure; my test hardware has 5000K emitters with no optics, and it looks pretty good that way.

FWIW, here’s how the ramp shape is defined:

  • Levels 1 to 50: 1x7135 (0.1 to ~150 lm)
  • Levels 51 to 100: 1+13x7135 (~150 to ~2000 lm)
  • Levels 101 to 149: 14x7135 + FET (~2000 lm to ~13500 lm)
  • Level 150: FET only (direct drive, ~14000 lm)

So, in the low third, the LEDs actually turn on and off very quickly. Above that, they never actually turn off… they just oscillate between two levels. And at levels 50, 100, and 150, there is no PWM at all. It’s just on, at a steady level.
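
For illustration, here’s roughly how a single ramp level might map onto the three channels, using the zone boundaries above. This is a simplified sketch with linear math; the real ramp tables are tuned, so treat the exact duty values as made up:

    /* Illustrative sketch of how one ramp level might map onto the three channels,
     * based on the zone boundaries above; the real ramp tables are more nuanced. */
    #include <stdio.h>

    static void describe_level(int level) {
        if (level <= 50) {
            /* low zone: only the single 7135 is PWMed, so the LEDs blink fully
             * on/off at a high (invisible) PWM rate                            */
            printf("level %3d: 1x7135 PWM %d/255, 13x7135 off, FET off\n",
                   level, level * 255 / 50);
        } else if (level <= 100) {
            /* mid zone: the 1x7135 stays fully on while the 13x7135 bank is PWMed,
             * so output oscillates between the 1x level and the full 14x level    */
            printf("level %3d: 1x7135 on, 13x7135 PWM %d/255, FET off\n",
                   level, (level - 50) * 255 / 50);
        } else if (level < 150) {
            /* high zone: all 14x7135 on, FET PWMed on top of them */
            printf("level %3d: 14x7135 on, FET PWM %d/255\n",
                   level, (level - 100) * 255 / 49);
        } else {
            printf("level 150: FET fully on (direct drive)\n");
        }
    }

    int main(void) {
        describe_level(25);
        describe_level(75);
        describe_level(125);
        describe_level(150);
        return 0;
    }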

The default config values for the ramp are:

  • Smooth ramp: 1 to 100, plus turbo.
  • Stepped ramp: 25 to 100 in 7 steps, plus turbo. Steps are: 25, 37, 50, 62, 75, 87, 100
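
For the curious, those stepped-ramp values are just the 25-100 range split into 7 evenly spaced steps, with the half-values truncated. A quick check (illustrative; not the actual config code):

    /* Quick check that the default stepped ramp is the 25..100 range split
     * into 7 evenly spaced steps (illustrative; not the actual config code). */
    #include <stdio.h>

    int main(void) {
        int floor_lvl = 25, ceil_lvl = 100, steps = 7;
        for (int i = 0; i < steps; i++) {
            /* integer math truncates the .5 values: 25, 37, 50, 62, 75, 87, 100 */
            printf("%d ", floor_lvl + i * (ceil_lvl - floor_lvl) / (steps - 1));
        }
        printf("\n");
        return 0;
    }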

You are correct on all counts. Most of this runs at ~15.6 kHz. It drops the speed only on the lowest few levels, in order to increase runtime and improve stability.
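
For reference, one way to land in that ~15.6 kHz ballpark on an 8 MHz attiny is 8-bit phase-correct PWM with no prescaler: 8 MHz / 510 ≈ 15.7 kHz. A generic setup sketch (not the Anduril source; the pin and register choices here are just one common arrangement):

    /* Generic attiny85 example (not the Anduril source): 8-bit phase-correct PWM
     * on Timer0 with no prescaler.  At an 8 MHz clock that runs at
     * 8,000,000 / 510 ~= 15.7 kHz, i.e. the "~15.6K" ballpark mentioned above.
     * Pin/register choices here are just one common arrangement. */
    #include <avr/io.h>

    int main(void) {
        DDRB   |= (1 << DDB1);                    /* OC0B (PB1) as output          */
        TCCR0A  = (1 << COM0B1) | (1 << WGM00);   /* phase-correct PWM on OC0B     */
        TCCR0B  = (1 << CS00);                    /* no prescaling                 */
        OCR0B   = 64;                             /* ~25% duty cycle               */

        for (;;) { }                              /* PWM runs in hardware          */
    }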

The BLF GT uses a buck driver which converts extra voltage into current and maintains constant output. From 10% to 100% power, it has steady output with no flicker. From 0% to 10%, it sets the buck chip to 10% power and uses PWM to adjust brightness by quickly turning the LED on and off. This increases the resolution and stability of the low modes, and there is no visual indication when it changes methods. It happens so smoothly it’s invisible.
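
In sketch form, that hybrid scheme looks something like this (illustrative only, not the GT’s actual driver code):

    /* Rough sketch of the hybrid scheme described above (illustrative only):
     * above 10% the buck chip regulates current directly with no PWM; below
     * 10% the chip is parked at 10% and PWM trims the brightness further. */
    #include <stdio.h>

    static void set_output(double percent) {
        const double threshold = 10.0;
        if (percent >= threshold) {
            printf("%5.1f%%: buck current = %.1f%%, PWM duty = 100%% (steady, no flicker)\n",
                   percent, percent);
        } else {
            printf("%5.1f%%: buck current = %.1f%%, PWM duty = %.0f%%\n",
                   percent, threshold, 100.0 * percent / threshold);
        }
    }

    int main(void) {
        set_output(50.0);   /* pure current regulation           */
        set_output(10.0);   /* boundary: both methods agree      */
        set_output(2.0);    /* buck held at 10%, PWM at 20% duty */
        return 0;
    }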

TK, can you explain a bit more how the MCU controls the 7135 channels? You say it oscillates between 2 levels without turning off. What 2 levels?