Luminous capacity - does it make sense?

Trying to reduce a light to just a few numbers has its critics, and rightly so. But there is utility in simplicity, at least as a first cut or a way of comparison. So, instead of, for example, reporting the spectral graph of the light output, the CCT and CRI (and maybe R9 and duv) are given as a guide.

I recently learned about ‘throw index’, i.e. maximum intensity divided by luminous flux, in candelas per lumen. I still don’t know how the flashlight beam is really distributed, but at least I can make an intelligent guess whether it’s a super-thrower, a lantern, or something in between. Some reviewers, like ZeroAir, routinely report it along with other parameters.

Which brings me to, call it, the ‘luminous capacity’ of a flashlight with a fully charged battery, in lumen-hours. Similar to quoting the energy content of a battery in Wh at certain loads, the luminous capacity gives the total OTF output (in lumen-hours) from one battery charge, for a given light with a given LED, perhaps at different loads/brightness levels.

I don’t think it’s a popular measure, though I think I saw it somewhere once before. I find it useful to roughly quantify and compare how much total light a flashlight+battery can produce on one charge, more or less independently of its discharge profile. It is a rough number, and it varies with the output level used (lower brightness over a longer time may give more total light than a higher, shorter discharge, though probably only up to a point), but I find it useful to have an idea how much ‘total light’ I’m holding in my hand. I also found it a decent first-cut indicator of driver efficiency, other things being equal.

To calculate it, one needs a reliable runtime graph for the flashlight (possibly at different output levels) - lumens vs. hours - and the area(s) under the curve(s). So, as a proof of concept, I estimated the ‘luminous capacity’ (maybe there is a better name for it?) for a few lights in several battery sizes using published runtime graphs for them (a sketch of the integration step follows the lists):

16340

  • Sofirn SC21 (regulated, non-Anduril): ~300 lm•h
  • Sofirn SC21 Pro (unregulated, Anduril): ~200 lm•h

14500

  • Emisar D3AA: ~350 lm•h

18350

  • Sofirn SC13/519A: ~200 lm•h
  • Wurkkos TS11S: ~250 lm•h
  • Sofirn IF19: ~300 lm•h
  • Skillhunt EC200Mini: ~400 lm•h
  • ThruNite Catapult Mini Pro: ~500 lm•h (High) | ~600 lm•h (Med)?
  • Wurkkos FC11C w/18350: ~350 lm•h on High, ~500 lm•h on Low.

18650

  • Sofirn D25LR: ~800 lm•h
  • Sofirn HS42: ~1000 lm•h (High/Med)
  • Sofirn HS21: ~1000 lm•h (on Turbo/High, and higher on Med)
  • Wurkkos FC11 (unregulated): ~1000 lm•h (Med)
  • Wurkkos FC11C: ~1000 lm•h (Turbo/High) | ~1500 lm•h (Med)
  • Zebralight SC64w: ~1200 lm•h
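
For what it’s worth, this is roughly how I do the integration - a minimal sketch, with the runtime points made up for illustration (in practice they would be read off a reviewer’s graph):

```python
# Rough lumen-hour estimate from a digitized runtime plot (trapezoidal rule).
# The (hours, lumens) points below are invented for illustration only.

def lumen_hours(points):
    """Integrate output (lm) over time (h) with the trapezoidal rule."""
    return sum(0.5 * (lm0 + lm1) * (t1 - t0)
               for (t0, lm0), (t1, lm1) in zip(points, points[1:]))

# Hypothetical 18650 light: brief 800 lm burst, stepdown, long flat stretch, cutoff.
runtime = [(0.0, 800), (0.05, 800), (0.10, 450), (2.5, 400), (2.6, 0)]

print(f"~{lumen_hours(runtime):.0f} lm·h")   # about 1100 lm·h for this made-up curve
```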

I wonder if such a parameter or comparisons based on it make sense to others and if there are some deep pitfalls in using it that I don’t see.

As mentioned, I treat it as a rough indicator, and the precision of the values above is limited (to perhaps +/-10% at best). The battery and discharge rate also play a role, but, at least to me, this value gives some feel for how much light I can practically squeeze from a particular flashlight and thus what can be done with it. Roughly.

.
.
.
p.s. This goes right against the simplicity claimed above and would be harder to interpret, so it’s probably not a good idea, but I figured I’d jot this thought down as well - perhaps taking the square root of lm·h and giving it some fancy name could theoretically be an even better measure of the ‘total perceived light’ a flashlight can deliver, as it would attempt to account for the nonlinearity of brightness perception…

3 Thanks
1 Thank

Lumen-hours is indeed a (rarely used) metric, but I do quite like it.

Obviously, even for a given light, it changes with output as LEDs are more efficient at lower drive levels.

I looked at lumen-hours a while back when I was shopping for a headlamp for work; I needed a minimum runtime of 8 hours without a battery change. Most of the lights didn’t have directly comparable output levels, so the lumen-hours metric helped make them more comparable.

I think this metric is interesting and useful, though somewhat difficult to define.

As you observed, LEDs are less efficient at higher drive levels, while drivers can be extremely inefficient at low drive levels. So it would be interesting to see what drive level to choose in order to make both reasonably efficient. Maybe take all the runtime plots of a light at different levels, integrate them, and take the maximum? This gives a sense of luminous capacity under most ideal conditions, which seems a reasonable metric since the luminous capacity changes with every drive level. Taking a maximum makes more sense than taking an average, since the modes/UI designs are somewhat arbitrary, and taking the maximum is not sensitive to pathological behavior at the low end.
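
Schematically it could look something like this - the per-mode curves here are invented placeholders, and the interesting output is just the largest of the integrals:

```python
# Integrate each mode's runtime curve and report the best achievable lm·h.
# The curves are invented; real ones would come from runtime tests.

def lumen_hours(points):
    return sum(0.5 * (a[1] + b[1]) * (b[0] - a[0])
               for a, b in zip(points, points[1:]))

modes = {
    "Turbo": [(0.0, 1800), (0.03, 1800), (0.08, 700), (1.4, 600), (1.5, 0)],
    "High":  [(0.0, 700), (2.0, 650), (2.1, 0)],
    "Med":   [(0.0, 250), (7.0, 240), (7.2, 0)],
}

per_mode = {mode: lumen_hours(pts) for mode, pts in modes.items()}
best = max(per_mode, key=per_mode.get)
print(per_mode)
print(f"luminous capacity: ~{per_mode[best]:.0f} lm·h on {best}")
```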

I’d say it’s a very reasonable idea, especially if this maximum is reported with a sensibly low number of significant figures, given the uncertainties in the available runtime graphs and the effects of battery type and temperature. The few examples I tried above are probably nothing surprising, but they are quite telling, at least to me, when compared side by side.

I think that if I know the max lm•h, along with max lm, min lm, cd/lm, CCT, CRI, and the total weight, I would know quite a bit about what to expect from a flashlight without seeing it. Call it the 7-number summary :⁠-⁠)

It’s a rather obscure measurement. The only thing I’d change is making it lumens squared, so that a light couldn’t “cheat” its way to a high value by having an aggressive stepdown and staying there for a while.

Squaring lumens would be a way to weight the stepdown level more heavily in the metric, though I’m sure there is a better way.

In theory, I would go the other way - taking the square root of it, as I mentioned in passing earlier. The reason is that lm·h is supposed to capture the total amount of light perceived, but what the eye sees is not linearly related to ‘lumens’: all other things being equal, the eye perceives roughly a doubling of brightness when the measured lumens quadruple.

In other words, if one light produces, say, 1000 lm·h and can shine at 1000 lm for 1 hour, and another, lower-capacity light can do a quarter of that, 250 lm for 1 hour, the lower-capacity light is not necessarily perceived as producing 4 times less light over that hour - more like half, given how we perceive brightness (which is what the square-root transformation would capture).
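
As a quick worked check of those numbers (same 1-hour runtime assumed for both lights):

```latex
% Both lights run for the same 1 hour, so the brightnesses can be compared directly.
\frac{B_{1000\,\mathrm{lm}}}{B_{250\,\mathrm{lm}}}
  \approx \sqrt{\frac{1000}{250}} = \sqrt{4} = 2
% i.e. the 250 lm light is perceived as roughly half as bright, not a quarter;
% \sqrt{\mathrm{lm\cdot h}} gives the same 2:1 ratio here because the runtimes are equal.
```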

All in all, I don’t think it’s worth it - straight lm·h is already complex enough a metric - but its interpretation is quite intuitive, as everybody understands lumens and hours (and can relate to the (k)W·h commonly used as a unit of electrical energy consumption). One can mentally correct for the nonlinear visual perception of ‘lumens’ if needed.


The lumen-hours unit is an attempt to concisely and approximately quantify how much ‘light’ one can get out of a particular LED/flashlight/brightness/battery combination - out of the flashlight in your hand, in short. As lights are usually more efficient at lower outputs, the lm·h tends to be lower on High and Turbo. An 18650 light may be around 1000 lm·h, varying a bit with the driver, LED, optics, level, and battery. An 18350 light may be some 300 lm·h. Not an insignificant difference.

There is another interesting aspect to it too. Humans perceive brightness non-linearly w.r.t. luminance - more or less logarithmically (or close to the inverse power law). So, to experience an illuminated object as about twice as bright as another one, its measured luminance needs to increase considerably more than twice (possibly around 4 times). Which, more or less, means that to maximise perceived brightness over time, it’s better to keep the light at a lower level for longer than the other way around - it takes more than twice (possibly around 4 times) the power to make things look twice as bright. Which also agrees with common sense - use the lowest level that is useful, for as long as you can.

There is something problematic about this: lots of folks say that perceived brightness obeys a log law or an inverse-square law with respect to lumens, but the simple fact that there is no consensus on which law applies suggests that neither claim is well-founded.

Suppose that perceived brightness does obey a log law. What this means is that, for every multiplicative increase of ×B in lumens (where B is the base of the logarithm), there is an additive change of +1 in perceived-brightness units. This is not compatible with the saying that quadrupling the lumens doubles the perceived brightness, where both lumens and perceived brightness change multiplicatively.

Suppose that perceived brightness obeys an inverse power law, that is, (perceived brightness) = (constant) × (lumens)^p. The saying above then implies that p = 1/2, i.e., perceived brightness is (constant) × sqrt(lumens). But this is not an inverse power law, where one quantity decreases as the other increases (such as intensity versus distance): in our case both quantities increase monotonically with each other!
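
Spelling that step out symbolically, since it is easy to trip over:

```latex
% Power law: B = c\,L^{p}, with B = perceived brightness, L = lumens.
% "Quadrupling the lumens doubles the perceived brightness" then reads:
c\,(4L)^{p} = 2\,c\,L^{p}
\quad\Longrightarrow\quad 4^{p} = 2
\quad\Longrightarrow\quad p = \tfrac{1}{2}
% A log law, B = k\,\log_{b} L, instead turns every multiplication of L by b
% into the same additive step of +k in B, so the saying cannot hold under it.
```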

Please take a moment to reconsider and help stop the spread of potential misinformation!

1 Thank

I stand (mostly :-)) corrected. Thank you. I have amended the OP. But…

This is a super interesting if complex topic: what’s the relationship between the difference in luminance of an object (which can be easily measured) to what humans perceive as the difference in its brightness (which is harder to measure)?

Models abound, ranging from the Weber-Fechner ‘law’, through Stevens’ power-law version of it, to more complex relationships such as this one (Figure 14, taken from this paper). Context makes a difference.

Here is a comparison of a few simple, closed-form functions that could map luminance to perceived brightness (almost certainly wrong, but perhaps good enough to at least show the ‘concave’, diminishing-returns shape of the relationship, if not the degree or precise nature of the nonlinearity):
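
In lieu of the plot, here is a rough tabulation of a few such candidate mappings over normalised luminance; the specific functions are illustrative guesses only:

```python
# Candidate luminance -> perceived-brightness mappings, normalised so 0 -> 0 and 1 -> 1.
# Purely illustrative; only the general concave shape matters here.
import math

def linear(x):      return x                                    # no correction at all
def sqrt_law(x):    return math.sqrt(x)                          # power law with exponent 1/2
def cube_root(x):   return x ** (1 / 3)                          # CIE lightness is roughly cube-root-like
def shifted_log(x): return math.log(1 + 9 * x) / math.log(10)    # log(x+1)-style fix so that 0 maps to 0

for x in (0.0, 0.1, 0.25, 0.5, 0.75, 1.0):
    print(f"L={x:4.2f}  linear={linear(x):.2f}  sqrt={sqrt_law(x):.2f}  "
          f"cbrt={cube_root(x):.2f}  log={shifted_log(x):.2f}")
```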

My point is that in the flashlight context, all other things being equal, the luminance of an object illuminated by a flashlight is proportional to the luminous flux (lumens) of the flashlight - double the lumens and the object’s luminance doubles with it. But the increase in the object’s brightness that we perceive will be less than twofold (how much less is debatable).

The simplest approximation - a power law with exponent 1/2, or NormalisedBrightness = sqrt(NormalisedLuminance) - is close to the (inverse) gamma correction widely used in photography and computer displays to bring luminance changes more in line with the brightness changes we perceive.

I think it may be reasonable to use it for flashlights as well, at least approximately, and it aligns with practical experience - a 1000 lm beam does not ‘feel’ twice as ‘bright’ as a 500 lm beam from the same flashlight (but a 2000 lm beam possibly will). The reality is almost certainly more complex, but nothing is perfect :-)

So, going back to the initial post: assuming the flashlight (i.e. optics+LED+driver+battery) holds the same number of lm·h regardless of output (not true, but close enough), reducing the perceived brightness by half would cut the battery draw perhaps 4 times (and thus roughly quadruple the runtime). So if I want to squeeze the maximum ‘perceived brightness’ out of a flashlight, reducing the brightness slightly may increase the runtime much more, and in effect give more total ‘perceived light’ - even though the total lm·h doesn’t change. It’s the perceived quantity of light that ultimately matters.

In other words, if you have two flashlights with the same lm·h, one set to a higher output and the other to a lower one, the latter will be perceived as giving more total ‘light’ overall - not because LEDs often have higher efficacy at lower power (which may also contribute), but because the brightness we perceive varies nonlinearly with luminance. This is what I meant.
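
In symbols, under the (simplifying) assumptions of a square-root perception law and a capacity C in lm·h that does not change with the level:

```latex
% L = chosen output in lumens, so the runtime is t = C / L.
% With perceived brightness B \propto \sqrt{L}:
B \cdot t \;\propto\; \sqrt{L}\,\frac{C}{L} \;=\; \frac{C}{\sqrt{L}}
% 'perceived brightness x time' therefore grows as the level is lowered,
% even though the lumen-hour total C stays the same.
```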

2 Thanks

Thank you so much for furthering this discussion! The articles you linked were excellent reads, and the Desmos plots were illuminating.

The Weber-Fechner log law has been used in the context of brightness in the form of the apparent magnitude of celestial objects, and provides a logarithmic version of the unit lux. Stevens’ power law is also interesting, though the dependence of the power on the type of stimulus is somewhat concerning. But taking the power 1/2, as many have done, makes perceived brightness proportional to throw distance, which is convenient. You also pointed out a connection with gamma correction. There is a good argument for both the power law and the log law, and it’s unclear which one to choose.

The more wacky figure you posted serves as a good reminder that perceived brightness, similar to the definition of the lumen, may forever remain a purely psychological construct, not derivable from any set of physical constants.

In the context of defining the brightness-hour, both the log law and the power law run into the issue that, upon taking drive current to zero, the limit of achievable brightness-hour may diverge to infinity (due to the brightness vs lumen curve having infinite slope at 0). I see that you have noticed this issue with the log law, and proposed a correction by replacing log(x) with log(x+1), which is a natural choice since now zero lumens corresponds to zero perceived brightness (instead of negative-infinity perceived brightness). It is unclear what an analogous fix would be for the power law.

Also worth noting is that, assuming a strictly concave relation between perceived brightness and lumens, there will always exist some point at which the rate of change in perceived brightness is maximized (and greater than 1), and it seems reasonable for this point to occur at zero lumens. The interesting question is: what is this maximum slope, and what could it reasonably mean?

1 Thank

Good point. I never took throw distances seriously, as they are calculated down to the puny terminal illuminance of 0.25 lux, but I realise that their ratio may be a reasonable practical proxy for perceived flashlight intensity, or for the ability to throw.

If one light cranks out 50,000 cd and another 100,000 cd, the latter will look brighter and reach further, but perceptually not twice as much (maybe some 40% greater?). Comparing the throw-distance specs may give a more realistic reflection of their relative performance.
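
For reference, the ANSI/FL1 throw is just the distance at which the beam falls to 0.25 lux, so with inverse-square falloff:

```latex
d = \sqrt{\frac{I}{0.25\ \mathrm{lx}}} = 2\sqrt{I}
  \qquad (d \text{ in metres, } I \text{ in candela})
% For the example above:
\frac{d_{100\,000\ \mathrm{cd}}}{d_{50\,000\ \mathrm{cd}}} = \sqrt{2} \approx 1.41
% which is why the throw-distance ratio roughly tracks the square-root-law brightness ratio.
```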

Who said that things can’t be both: convenient and approximately correct :⁠-⁠)

3 Thanks

I just learned that luminous energy has a derived SI unit, the lumen-second, also called the talbot, making 1 lm·h equal to 3600 talbots.

5 Thanks

Nice find! But you are still contributing something essentially new because the talbot alone does not provide a well-defined measure of “luminous capacity” for a light: the value one gets changes depending on the mode!

Saw your unit used in this review!

[Review] NITECORE TM28 (4x CREE XHP35 HI, 4x 18650) | Candle Power Flashlight Forum

The implementation is problematic, however, being a comparison of turbo modes. It means that removing turbo mode from a light, which objectively makes it a worse light, would actually increase the “lm·h on maximum mode” metric. Hence my suggestion to take the lm·h reading for all modes and report the maximum, which can only go down when modes are removed and only go up when modes are added.

1 Thank

I agree - the idea of reporting the maximum lm•h for the light/battery system across the modes has merit. From what I’ve seen around, a lowish-medium mode often has an edge, sometimes a significant one - it could be around 1500 lm•h for an efficiently driven 18650 light on Med but around 1000 lm•h on High/Turbo. Another issue is getting accurate runtime plots to integrate - I’ve just realised they can differ quite a bit between reviewers testing the same light.

1 Thank

Another way to calculate something similar is lm/watt. From there you can just multiply by the measured capacity of the battery (in watt-hours) at that discharge rate to get lumen-hours.

You can calculate these from many existing reviews that report lumen and current measurements. The caveat is that most reviews measure lumens and current separately (since attaching an ammeter lowers the output, and most readers care about the lumen figures). Also, the voltage sag differs with the battery, but at lower output levels the estimate should be quite reliable.
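
A trivial sketch of that arithmetic, with every number below invented just to show the steps:

```python
# Luminous capacity estimated from separately measured output and tail current.
# All values are invented placeholders, not measurements.

measured_lumens   = 400    # lm at the chosen level, from a review's output table
tail_current_a    = 0.85   # A, measured separately at the tail
battery_voltage_v = 3.7    # V, rough average under this load
battery_energy_wh = 12.0   # Wh the cell can deliver at roughly this discharge rate

power_w     = battery_voltage_v * tail_current_a       # ~3.1 W drawn from the cell
efficacy    = measured_lumens / power_w                # ~127 lm/W for the whole light
lumen_hours = efficacy * battery_energy_wh             # ~1500 lm·h

print(f"{efficacy:.0f} lm/W  ->  ~{lumen_hours:.0f} lm·h")
```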

A relevant discussion can be found here

I think both such metrics are useful:

  • lm/W is, I think, called efficacy, and gives a feel for how well the driver/LED/optics in a light use the battery.
  • lm•h may be more useful for getting a feel for how much ‘light’ one can squeeze out of the light as a whole (driver/LED/battery/optics). I find it useful for estimating runtimes as a function of the chosen intensity.

The caveat is that both will likely depend on the output level. It may be a good idea to pick the maximum achievable value for each, which for many lights should occur around the mid-to-low levels (as suggested earlier in this thread).

Knowing these two metrics at least approximately, along with cd/lm, can tell quite a bit about how the light might function.

1 Thank