Proposed Experiment: does dedoming reduce radiometric output?

I decided to start this discussion after thinking carefully about the oft-repeated claim that dedoming a 519A costs about 15% of its lumens.
One easy assumption to make is that the lost 15% went into heat and other forms of inefficiency. However, dedoming is known to lower CCT, which means trading blue/green light for red light, and red contributes far less to the lumen count because the human eye is less sensitive to it. For the same radiometric power, red is much dimmer than green, so it is not clear how much of the 15% lumen loss is accounted for by this spectral shift, and how much is genuinely lost to heat and other inefficiencies.
To quantify this phenomenon, I propose the following experiment, and would be very grateful if someone with the right equipment could carry it out. Take one sample each of the 519A in 5700K and 4500K. It is known that the 4500K variant actually measures closer to 4200K, which happens to be the CCT of the 5700K post-dedome. Measure the lumens of both domed emitters at the same drive current. Then dedome the 5700K and measure its output at the same drive current. The difference between the ratios (dedomed 5700K)/(domed 5700K) and (domed 4500K)/(domed 5700K) tells you the percentage of lumen loss that cannot be explained by the CCT shift.
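
For clarity, here is the arithmetic as a minimal sketch with made-up placeholder lumen values (the real numbers would have to come from the sphere measurements described above):

```python
# Placeholder lumen values -- not measurements, just to show the arithmetic.
domed_5700k_lm   = 1000.0   # domed 519A 5700K at the test current
dedomed_5700k_lm = 850.0    # same emitter after dedoming (the claimed ~15% loss)
domed_4500k_lm   = 930.0    # domed 519A "4500K" (actually ~4200K) at the same current

dedome_ratio = dedomed_5700k_lm / domed_5700k_lm   # 0.85
cct_ratio    = domed_4500k_lm / domed_5700k_lm     # 0.93

# Lumen loss that the CCT/tint shift alone cannot explain
unexplained = cct_ratio - dedome_ratio             # 0.08 -> 8 percentage points
print(f"Loss not explained by CCT shift: {unexplained:.1%}")
```
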
As a concluding remark, I think lumens are not a great way to quantify the output or efficiency of a high CRI emitter. Take the 519A we all know and love, which is 95 CRI with 80+ R9. What happens when you take some deep red output from it and replace it with green output of the same radiometric power? (I bet none of you would want to make this change.) Well, the R9 count goes down due to the loss of deep red, but the lumens would increase because the human eye is more sensitive to green than to red at the same power. You'd get a greener 519A with lower R9 and higher output... sounds like an LH351D! This is my rationale for the opinion that the 519A might be more power-efficient than, and thus superior to, the LH351D, despite its slightly lower lumen output.
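
To illustrate the red-for-green swap numerically, here is a rough sketch using a made-up spectrum and a Gaussian stand-in for the CIE photopic curve (real calculations should use the tabulated CIE data and a measured 519A spectrum):

```python
import numpy as np

wavelengths = np.arange(380, 781, 1.0)   # nm, 1 nm bins

# Gaussian stand-in for the CIE photopic curve V(lambda), peaking at 555 nm.
V = np.exp(-0.5 * ((wavelengths - 555) / 42) ** 2)

def luminous_flux(spd):
    """Lumens for a spectral power distribution given in W/nm on 1 nm bins."""
    return 683.0 * np.sum(spd * V)

def radiant_flux(spd):
    """Total radiant watts for the same SPD."""
    return np.sum(spd)

# Toy warm-white-ish SPD (made up), W/nm
spd = np.exp(-0.5 * ((wavelengths - 600) / 90) ** 2) * 1e-3

# Move 0.01 W of radiant power from a deep red band (~660 nm) to a green band (~530 nm)
moved_w = 0.01
red   = np.exp(-0.5 * ((wavelengths - 660) / 10) ** 2)
green = np.exp(-0.5 * ((wavelengths - 530) / 10) ** 2)
spd_greener = spd - moved_w * red / np.sum(red) + moved_w * green / np.sum(green)

print(f"Radiant flux:  {radiant_flux(spd):.3f} W -> {radiant_flux(spd_greener):.3f} W (unchanged)")
print(f"Luminous flux: {luminous_flux(spd):.0f} lm -> {luminous_flux(spd_greener):.0f} lm (higher)")
```

The radiant watts stay the same while the lumens go up, even though the spectrum has become greener and lost deep red.
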
Let me know your thoughts about the above reasoning!

this thread has data and info that may help answer your questions:

IIRC Cree mentions a 10% output loss for the flat-lens version at the same CCT.
But yeah, here with dedoming the CCT drops and the proportion of green also decreases.

To measure the radiant flux difference we need a good sphere (barium sulphate coated, color calibrated) and a spectrometer.

Thank you for sharing the thread. Unfortunately, the difference in drive current makes cross-comparison between 5700K and 5000K impossible. But it is an interesting and informative read nonetheless, and the BLF thread you linked in the comment corrected my understanding of how intensity is gained via dedoming.

Here’s Dr. Jones’s excellent post from 2012 on Domes, Dedoming and Throw

It is pretty much the definitive explanation of what dedoming actually does as discussed on BLF.

Although I've already seen this thread, thank you for sharing it! It's an extremely well-written technical explanation of what dedoming does and why. I already understand what dedoming does at a qualitative level; what I'm hoping for is specific comparison data between 519A emitters at different CCTs, to quantify how much of the lumen loss can (or cannot) be attributed to the warmer CCT and the downward shift in duv.

If a significant portion of the loss can be attributed to the CCT and duv shift, then the claim that dedoming makes an emitter (insert measured percentage) less efficient should be treated as a myth--that is, debunked.

On a related note, it would be very interesting if someone could get the spectrum data (in CSV or another workable format) for a bunch of high CRI emitters, from which I could then calculate radiometric output, which I argue is a better efficiency metric than lumens for high CRI emitters. I suspect the 519A would come out on top of the LH351D.

Radiometric output and lumens have very similar formulas if I understand correctly--both integrate the spectral power distribution over wavelength, except that for lumens the spectrum is first weighted by the luminosity function (and scaled by 683 lm/W), while radiometric power is the unweighted integral.
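
As a toy illustration of the two integrals (the SPD and the Gaussian approximation of V(λ) below are made up, not real emitter data):

```python
import numpy as np

wavelengths = np.arange(380, 781, 1.0)                        # nm, 1 nm bins
spd = np.exp(-0.5 * ((wavelengths - 600) / 80) ** 2) * 1e-3   # toy SPD in W/nm (made up)

# Gaussian stand-in for the CIE photopic luminosity function V(lambda);
# use the tabulated CIE data for real calculations.
V = np.exp(-0.5 * ((wavelengths - 555) / 42) ** 2)

radiant_flux_w   = np.sum(spd)              # watts: unweighted integral over wavelength
luminous_flux_lm = 683.0 * np.sum(spd * V)  # lumens: weighted by V(lambda), scaled by 683 lm/W

print(f"{radiant_flux_w:.3f} W radiant, {luminous_flux_lm:.0f} lm")
```
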

I’m confused.

Firstly, I thought the lumen loss from dedoming a 519A was a measured result, not just an observation.

Secondly, I thought the efficiency of an LED was defined as lumens per watt, so even if the lumen loss can be attributed as you state, there are still fewer lumens for the same power, so efficiency is reduced.
How are you defining efficiency?

Good questions--let me clarify.

Lumens/Watt is technically efficacy, not efficiency. Imagine a monochromatic UV light that converts 100% of the input electrical power to radiant power--since it is not visible, the lm/W ratio is 0, but it would be unreasonable to say that this light is inefficient. It's more accurate to say that the device is not efficacious for visible light production.

Efficiency is a unitless quantity: (Watts of radiant output)/(Watts of electrical input); notice that the Watts cancel out. In the previous example, the UV light would have 100% efficiency.
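
A quick numeric sketch with made-up numbers, just to show how the two metrics can diverge:

```python
# Hypothetical LED, placeholder numbers only.
electrical_in_w = 3.0     # watts of electrical input
radiant_out_w   = 1.2     # watts of total radiant output (all wavelengths)
luminous_out_lm = 400.0   # lumens (V(lambda)-weighted output)

efficiency = radiant_out_w / electrical_in_w     # unitless: 0.40, i.e. 40%
efficacy   = luminous_out_lm / electrical_in_w   # ~133 lm/W

print(f"Efficiency: {efficiency:.0%}, efficacy: {efficacy:.0f} lm/W")
```
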

And yes, the lumen loss from dedoming is well established. However, I argue that lumens are a poor metric for the output of high CRI sources, and that efficiency is a better metric. I anticipate a strong counterexample to this claim--anyone care to guess what it is and why? (Hint: think outside of LEDs.)

Ok, so I’m a little less confused. Thanks for explaining that I meant efficacy when I said efficiency.

I’m still trying to understand what it is you want to establish though.

The dedome referred to in the first post results in fewer lumens, so the light appears dimmer; it is less efficacious (:D). Surely that’s the relevant point?
How is knowing that it’s just as efficient after the dedome more relevant than knowing it will be 15% dimmer?

Thank you for continuing the conversation, and I'm glad to have cleared things up!

If dedoming does not change efficiency, then I would feel much more inclined to do it because I would know that the 15% loss in lumens is fully compensated by better CCT and tint, and that there is no additional waste heat generated. It makes me feel better. Sorry, I'm kinda obsessive like that :D

The above hypothesis is likely not strictly true, but I am willing to claim that the true efficiency loss is significantly less than 15%, and it would be really nice if someone could actually test this claim. I imagine it would be a difficult undertaking, however, as the only way I know to calculate radiometric power (the numerator of efficiency) is to take the measured spectrum and integrate it over wavelength (or, if the spectrum is given in photon counts, weight it by hc/wavelength first).

One might also have to normalize a relative spectrum against a lumen measurement, which drags the luminosity function into it; the CIE 1931 photopic curve is the accepted standard, but it is still derived from subjective experience rather than fundamental laws of physics, which makes me slightly uneasy. Anyways, I'll stop rambling now.
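
In case it helps, here's a rough sketch (with a made-up relative spectrum and a Gaussian stand-in for V(λ)) of how one could back out absolute radiant watts from a relative spectrum plus a calibrated lumen measurement:

```python
import numpy as np

wavelengths = np.arange(380, 781, 1.0)                     # nm, 1 nm bins
rel_spd = np.exp(-0.5 * ((wavelengths - 600) / 80) ** 2)   # relative SPD, arbitrary units (made up)

# Gaussian stand-in for CIE V(lambda); substitute the tabulated curve for real work.
V = np.exp(-0.5 * ((wavelengths - 555) / 42) ** 2)

measured_lm = 850.0   # lumens from a calibrated sphere (placeholder value)

# Luminous efficacy of radiation (lumens per radiant watt) for this spectral shape
ler = 683.0 * np.sum(rel_spd * V) / np.sum(rel_spd)

# Absolute radiant power implied by the lumen reading and the spectral shape
radiant_w = measured_lm / ler
print(f"LER: {ler:.0f} lm/W, radiant power: {radiant_w:.2f} W")
```
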

On a sidenote, I am very curious about how color-blind people experience brightness, and whether they have a different luminosity function.

P. S. Apparently the luminosity function is also called the luminous efficiency function, which is confusing terminology, as what it measures is much closer to efficacy!

is the total output reduced (integrated across all wavelengths)? i was under the impression that this was the case.

if so, most or all of that is due to reflection (including increased TIR) leading to reabsorption. that would generate additional waste heat and reduce efficiency.

i wish someone could test it like you suggest.

Strictly the answer is probably "yes" (I believe your explanation that reabsorption is not perfectly efficient), but I am curious about the exact percentage, and suspect that it is much less than the commonly cited 10-15%.

I suddenly had another idea for testing radiometric output that doesn't involve a spectrophotometer. There is a much easier way to integrate the spectrum to obtain a quantity that varies positively with radiometric power, which is to convert all the light into heat!

Get two thermometers and put a thin, black, ideally thermally conductive coating on the probes, which should be large enough to collect the entire beam of the flashlight. Then shine the lights at the thermometers under identical setups, and see which one warms up faster or sustains a higher temperature at equilibrium! Repeat the experiment, swapping the lights between setups a bunch of times, to see if there is a significant difference. If there is no significant difference, check that the setup is sufficiently sensitive by running the same light on different modes and seeing how much the equilibrium temperature changes.

i don't see how that would work. don't you need to capture all of the photons being emitted to test efficiency? only some of the light is hitting the thermometer.

Oh yes, I forgot to mention, the thermometer needs to have a large receiving area. That's a good catch and I'll edit my previous post!

lol. i am envisioning

  • a thermally-insulated integrating sphere painted black. maybe a styrofoam sphere with interior painted matte black, and a plug is bored out to insert the flashlight head.
  • a thermometer as the detection instrument (e.g. thermocouple) suspended roughly in the center

basically a type of calorimeter. you run the light for a set amount of time (say, two minutes). then you remove the light and plug the hole.

after removing the light and plugging the calorimeter, you log the thermometer data and wait for the reading to stabilize. the signal may go up a bit as it reaches thermal equilibrium and then slowly decay as heat leaks out. the signal peak is the datapoint of interest.

assumptions:

  • the apparatus starts at thermal equilibrium and is at the same initial temperature for each run
  • the amount of heat transferred directly from the head of the flashlight is very small compared to the heat absorbed radiatively by the paint
  • the heat rate input into the apparatus is very large compared to the heat loss rate out of the sphere
  • the energy absorbed by the paint is very large compared to the energy reabsorbed by the flashlight head. or at least the ratio is consistent from run to run

the advantage over a classic integrating sphere is that the detector is much cheaper
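
a very rough feasibility check with made-up numbers (radiant power, sphere size, and the simplifying assumption that all the heat ends up in the interior air are all guesses):

```python
import math

# all numbers below are placeholder guesses for a feasibility check, not a
# real thermal model (in practice much of the heat soaks into the paint and
# styrofoam rather than the interior air)
radiant_power_w = 1.0      # assumed radiant output of the light, W
run_time_s      = 120.0    # "say, two minutes"
sphere_diam_m   = 0.30     # assumed interior diameter of the styrofoam sphere

air_density = 1.2          # kg/m^3, roughly room conditions
air_cp      = 1005.0       # J/(kg*K), approximate specific heat of air

volume   = (4.0 / 3.0) * math.pi * (sphere_diam_m / 2.0) ** 3
air_mass = air_density * volume
energy_j = radiant_power_w * run_time_s

# temperature rise if all of that energy ended up in the interior air alone --
# an optimistic ceiling on what the thermocouple could register
delta_t = energy_j / (air_mass * air_cp)
print(f"{energy_j:.0f} J in, ~{air_mass * 1000:.0f} g of air, dT ceiling ~{delta_t:.1f} K")
```
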

It does drop the output. There was a thread about it on CPF; IIRC, dedomed LEDs showed fewer lumens in a sphere compared to the same LED before dedoming.

Within this thread, the quantity of interest is radiometric output, measured in watts, which is a fundamentally different quantity from lumens. Fewer lumens does not imply that less radiant power is emitted.

Ah, well that I understand :slight_smile: I hope you find the answer.