Nonlinearity of luminance (ratio of luminous flux to brightness in a reflector)?

I already talked about this in the thread on the LED test of the FFL351A: in the course of developing a new test process for luminance and migrating old data sets to the new format, I came across an interesting problem. It’s about the non-linearity of the ratio of brightness to luminous flux. This ratio indirectly yields the luminance.
Perhaps one of you has an idea what’s going on here. (Excuse me for having the charts in German :smiley: )

Normally, luminous flux and brightness in an unchanged reflector, with no change in focus etc., should be proportional to each other regardless of the operating current: if the operating current increases, the luminous flux, and with it the brightness of the LED in a reflector, increases by the same factor, so their ratio stays constant.

However, this is not the case with virtually any LED. Only the 519A stands out positively here with its relatively flat curve.

The luxmeter plays a decisive role in the shape of the curve (see the GM1020: its curve goes completely haywire because the automatic switching of the measuring range destroys the precision and linearity of this device). Yet the curves are completely different for the LEDs shown below, despite the identical luxmeter and identical test setup (same power supply unit, same cables, same distance, same reflector, same temperature, ambient brightness <1 lux).

To rule out the influence of the ambient temperature, I ran the “Yinding 5050” 95 CRI glass at high current for a few minutes to warm the heat sink up by around 20 °C. I then carried out the same test again, but with the fan switched on and the heatsink cooled down to around 25 °C beforehand. Although the curves differ slightly (mainly because the warm LED no longer reaches its actual maximum operating current due to the increased temperature, so the luminous flux no longer matches the test with a colder Tsp), the shape is identical:

Even the use of different luxmeters in the integrating sphere is irrelevant: even in this configuration (same luxmeter for the luminous flux test in the integrating sphere and for the luminance measurement) there are significant deviations, and they are different for each LED.

So, by my conclusion, it must be the LED itself, because that is the only variable that has changed. I don’t know the reason. Perhaps the LED changes its luminance non-linearly with respect to the luminous flux, possibly due to a minimal change in the luminous surface, or perhaps due to changes in the properties of the silicone. However, that cannot apply to the SFT-70, which has no silicone at all over the LED chips and should be unaffected by this. Another idea would be that certain phosphor particles become oversaturated, or change their light transmission etc. with the increasing temperature that comes with higher operating current, which could also explain the sometimes strong color shift with rising current.

Do any of you have any ideas? There must be some reason for this, and at least it doesn’t seem to be my measuring method that is causing it…

Regards, Dominik

2 Thanks

One idea I have is that it could be something phosphor-related. Since LEDs change CCT and DUV with brightness too, my best bet would be that it is some saturation effect of the phosphor or similar…

I cannot think of any simple effect that could be going on at the semiconductor level. I doubt any form of cavity-enhancement-induced coherence (the sort of thing that approaches the boundary between LED and laser) can be going on, not at typical die thicknesses and optical wavelengths. But tbh I have no idea how thick an LED die is either. And if it were showing cavity enhancement of certain frequencies, those frequencies should stick out as peaks in the spectrum, which is not happening.

Carrier density inside the chip could maybe be changing, leading to more recombination at the edges or in the center of the chip. That could definitely affect the luminance, and it would be dependent on how the chips are bonded. But I cannot think of any effect that would explain why this happens, either.

So… uh, dunno. Are you sure it is not nonlinearities of the lux meter and of the instrument that measures the integrated flux, which behave slightly differently depending on the spectrum they are measuring? That would be the most reasonable explanation I have.

Can you check whether you find any common shape between LEDs of similar CRI, similar CCT, similar DUV, or similar type (flip-chip or regular, multi-emitter or single)? So basically: if two LEDs show a similar shape, see what they have in common, and then check whether you can spot any clear pattern.

My instinct says the curves could be heat related, so maybe the localised effect of heat on the phosphor?
The XC-3535’s reversed curve is one to look at. I can’t find any posts on it, but I seem to remember the DUV increasing with current, in contrast to most other LEDs which see a decrease in DUV, so if my memory is correct this could be linked.

Edit: It could also be to do with the boundary between the phosphor and the LES; expansion might affect the path the photons take, for example.
Pure guesswork ofc :slight_smile:

3 Thanks

Note that luminance, as it is measured here (lux/lm), has a unit of area^{-1}. Since the relevant areas in the setup (reflector aperture and LES) are completely unchanged, the discrepancy probably comes down to a change in light distribution, probably at the LED.
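To make that explicit, here is a back-of-the-envelope derivation under idealized assumptions (perfectly Lambertian LES with luminance L_v and area A_LES, lossless reflector with aperture area A_refl, measurement in the far field at distance d):

```latex
\Phi = \pi \, L_v \, A_{LES}        % total flux of a Lambertian emitter
I \approx L_v \, A_{refl}           % peak beam intensity set by the reflector aperture
E = I / d^2                         % illuminance at distance d (inverse-square law)
\Rightarrow \; \frac{E}{\Phi} \approx \frac{A_{refl}}{\pi \, A_{LES} \, d^2}
```

The right-hand side is pure geometry with unit m^{-2}, so the measured lux/lm ratio should be a constant; if it drifts with current, at least one of these idealizations is breaking down at the LED.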

The saturation of phosphor, as previously mentioned, is a well-known phenomenon. At higher flux densities the phosphor becomes less efficient and possibly also less transparent, which is part of the reason why the output-over-power curve is concave. An LED with a damaged phosphor and an intact die peaks in output much sooner than does a fully intact LED.

The flux density of an LED is not constant across its surface: it is higher near the center (the part that contributes most to intensity measurements) and lower toward the edges. It therefore seems sensible to guess that different parts of the LES saturate at slightly different rates, with regions of higher flux density saturating to a greater extent than regions of lower flux density.

If this is the case, it could cause the center of the LED to lose efficiency faster (i.e., peak in output sooner as a function of current) than the edges of the LED. A side effect would be that the output, when normalized to a density function over area, becomes less concentrated at the center, and this evening-out of the light-over-area distribution could result in a loss of peak (central) luminance. But this is all guesswork and I have no experiment to confirm or deny it. It would also fail to explain the SFT70, the one LED with the reversed trend.

Maybe one could test this by projecting the LES out with a convex lens and measuring the intensity at the center versus the edge, repeating at different drive levels to see whether the distribution over area has changed.
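The evaluation for such a test could be as simple as this sketch (all readings are hypothetical placeholders; only the ratio matters):

```python
# Sketch of the evaluation for the projected-LES test; all readings are
# hypothetical placeholders. If the light-over-area distribution were fixed,
# the center/edge ratio would stay constant across drive levels.
drive_levels_a = [0.5, 1.0, 2.0, 3.0]       # drive current in amps
center_lux     = [1200, 2300, 4200, 5800]   # luxmeter at the image center
edge_lux       = [400, 790, 1500, 2150]     # luxmeter near the image edge

for amps, c, e in zip(drive_levels_a, center_lux, edge_lux):
    print(f"{amps:4.1f} A  center/edge = {c / e:.3f}")
```

A falling ratio would mean the center saturates faster than the edges; a rising one, the reverse.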

2 Thanks

I still think this is a possibility. However, these curves look different even when using the same luxmeter with different LEDs (each with a different shape, but different nonetheless). If only the luxmeter falsified the values while the luminance of the LED stayed linear, the line in the diagram would be completely straight, or at least always the same, without any significant outliers. But a comparison with different luxmeters does not show this either. Since only the relationship between luminous flux and brightness is considered here, absolute measurement inaccuracies of the sensor play no role, but linearity does. A commercially available luxmeter will not be one hundred percent linear anyway.

This was also my first thought. It seems obvious at first glance, but I am pretty sure it is not as easy as it seems :smiley:

I will do the following test: I will measure an LED with the highest possible maximum operating current (XHP50.2 HD or similar) up to a maximum of 3-5 A or so. Luminous flux and luminance will be measured with the same instrument, as a luxmeter is used for both measured variables.
The limitation to 3-5 A serves to avoid heat-related effects and to stay as far below Tj as possible, since the heat sink can dissipate over 100 W easily (even 150 W is not a problem for a short time, until the heat sink reaches thermal saturation or the temperature continues to rise despite forced cooling). Of course there will be temperature-related effects, but in this range they should not be significant, especially for the silicone and phosphor.

The power supply unit and cable, connection, reflow, ambient temperature and ambient light are left completely unchanged. The reflector always sits on the LED with a suitable centering aid so that any effects cannot be triggered by unintentional movement of the reflector during the test.

In this setup, the only variable that changes is the LED itself. If the luxmeter were responsible for all the effects shown here, the curve, and especially the trend line, should come out more or less flat, demonstrating linearity (with individual small peaks due to display inaccuracy and the power supply).
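A minimal sketch of the evaluation I have in mind (numpy; the values are made up for illustration):

```python
import numpy as np

# Hypothetical measurement series (made-up values): drive current, luminous
# flux from the integrating sphere and beam illuminance from the reflector,
# both taken with the same luxmeter.
current_a = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0])
flux_lm   = np.array([210, 400, 575, 730, 1000, 1230, 1420])
beam_lux  = np.array([5200, 9900, 14100, 17800, 24100, 29300, 33400])

ratio = beam_lux / flux_lm        # proportional to luminance, unit ~ m^-2
ratio = ratio / ratio[0]          # normalize to the first measurement point

# Straight trend line over current; a slope near 0 means a flat ratio.
slope, intercept = np.polyfit(current_a, ratio, 1)
print("normalized ratio:", np.round(ratio, 3))
print(f"trend-line slope: {slope:+.4f} per amp (0 = perfectly flat)")
```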

I have a simple aspherical lens lying around. However, I don’t know how good it is in terms of glass quality and shape (it was bought on AliExpress a few years ago and was very cheap, around $10 or so). I can imagine that the lens itself massively changes the apparent luminance of individual areas of the LED chip…

Maybe I’ll give it a try. My idea was to photograph the beam pattern / the projected image of the LES on the whitewall with a camera and “measure” the intensity in the images. Is this a feasible way, or should I measure the individual areas directly with the luxmeter?
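For the image route, something like this rough sketch is what I have in mind (the file name and regions are placeholders; it assumes linear pixel data, i.e. RAW at fixed exposure, since JPEG gamma and auto exposure would skew the comparison):

```python
import numpy as np
from PIL import Image

# Rough sketch of the image-based route; the file name and ROI coordinates
# are placeholders. This only works with linear pixel data (RAW, fixed
# exposure/ISO, no clipped highlights).
img = np.asarray(Image.open("beamshot_3A.png").convert("L"), dtype=float)

h, w = img.shape
center_roi = img[h//2 - 20 : h//2 + 20, w//2 - 20 : w//2 + 20]
edge_roi   = img[h//2 - 20 : h//2 + 20, 50 : 90]   # placeholder edge region

print("center mean:", center_roi.mean())
print("edge mean:  ", edge_roi.mean())
print("center/edge:", center_roi.mean() / edge_roi.mean())
```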

2 Thanks

This remark clearly shows that the above posts are way above and beyond my paygrade.

The white lens in front of your light meter is a filter designed to match the sensor’s sensitivity to different colors to the sensitivity of the human eye.
Is it possible that the spectrum of the emitter varies with the current, and that the one linear factor you’re looking for therefore doesn’t exist?

Good point, but as far as I know this is not the white diffuser but a special glass window above the sensor which filters some wavelengths. I could remove it on one of my ‘play-around’ lux meters to check this theory. But to be honest, I don’t think this is the reason for these deviations, since the spectrum does not change much with increasing current.

I think the quality of the lens is a very valid concern; even minor imperfections can have a significant effect on the distribution of light in the image, as with caustic lenses. I am generally skeptical of the resolution and dynamic range of cameras when it comes to intensity; since the effect we are measuring is on the order of a few percent, I doubt a camera can pick it up, and a lux meter is probably better. But I am concerned about how grainy the phosphor is in many modern LEDs, which can cause significant variations in intensity (more than a few percent) across very small regions (e.g., a grain of green phosphor next to a grain of red phosphor), which means the placement of the lux meter has to be very exact. Maybe the lens could be slightly frosted to average things out, but I don’t know if it’s worth it.

1 Thank

Since the lens is only there to measure a relative change between power levels, not to make an absolute measurement, I would think nearly any lens would do.

My question is whether a lens is needed at all. If I read your comment right, all we are looking for is a change in the distribution of light, so measuring off the raw emitter should show whether your theory is correct.

The distance would have to be EXACT, however; even a small variation in location and distance could cause enough of a change. Ideally you would have a hemisphere over the LED with a series of mounts and holes for the meter on it. Then, to be precise, you would take a measurement at all locations, multiple times per location.

There are two distributions at play here: (1) the distribution of light output over the die of the LED, and (2) the distribution of light output over angle; my conjecture was that distribution (1) is not fixed across power levels.

The difficulty, I think, is that the two distributions do not offer information about each other. For example, imagine a 3x3 array of LEDs where only the outer ring of 8 LEDs is lit, and compare that to the same array where all 9 are lit. If viewed as a single large LED, the distribution of light over area is very different; the former would have zero intensity in the center. However, the distribution of light over angle would be essentially the same at any nontrivial distance, since summing copies of the same distribution does not change its shape.
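This is easy to check numerically. A small sketch with idealized point-source Lambertian emitters on a hypothetical 1 mm pitch, observed from 1 m:

```python
import numpy as np

# Numerical check of the 3x3 thought experiment: outer ring of 8 emitters
# vs. all 9 lit. Each emitter is an identical Lambertian point source on a
# hypothetical 1 mm pitch grid, observed from 1 m away.
pitch = 1e-3
grid = [(x * pitch, y * pitch) for x in (-1, 0, 1) for y in (-1, 0, 1)]
ring = [p for p in grid if p != (0.0, 0.0)]

def far_field(sources, angle_deg, dist=1.0):
    """Per-emitter illuminance at a viewing angle in the x-z plane."""
    a = np.radians(angle_deg)
    px, pz = dist * np.sin(a), dist * np.cos(a)   # observation point
    total = 0.0
    for sx, sy in sources:
        r2 = (px - sx) ** 2 + sy ** 2 + pz ** 2
        cos_t = pz / np.sqrt(r2)                  # Lambertian: I ~ cos(theta)
        total += cos_t / r2
    return total / len(sources)

for ang in (0, 15, 30, 45, 60):
    print(f"{ang:2d} deg  all9/ring8 = {far_field(grid, ang) / far_field(ring, ang):.6f}")
```

The ratio prints as 1.000000 at every angle: at distances that dwarf the pitch, the far field genuinely cannot tell the two arrays apart.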

What I missed was that to measure distribution (1), one would need to use the lens to get a (perfectly?) focused image of the LED die, in essence a magnifying glass on the emitter surface. Not that that isn’t what you were saying; I just didn’t catch it at first.

I would offer then that another component of this could be changes to the distribution of light over angle. Temperature variations across the emitting surface, or possibly even mechanical warping due to heat, could subtly change that distribution.

This could be tested by measuring lux at distinct points moving out from center, at various power levels.
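A minimal sketch of how such a scan could be evaluated (made-up readings; each row is one power level):

```python
import numpy as np

# Sketch of the angular-scan evaluation: lux readings at fixed angles on an
# arc around the bare emitter, one row per drive level (made-up values).
angles_deg = np.array([0, 15, 30, 45, 60])
scans = {                                   # drive current (A) -> lux readings
    0.5: np.array([1000, 930, 760, 510, 250]),
    2.0: np.array([4100, 3790, 3060, 2030, 980]),
    5.0: np.array([9800, 9020, 7150, 4620, 2150]),
}

# Normalize each scan to its on-axis reading; if the angular distribution is
# stable, the rows match, and drift would indicate a current-dependent beam.
for amps, lux in scans.items():
    print(f"{amps:3.1f} A  profile: {np.round(lux / lux[0], 3)}")
```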

That’s a very interesting possibility that I had not yet considered! For an ideal Lambertian source, the apparent brightness (luminance) of the die is constant across viewing angles, but real-life LEDs are not perfectly Lambertian, and a reflector only picks up the portion of the output at angles far from perpendicular, not the integrated output; this is part of the reason why deeper reflectors of the same diameter throw a bit better. A change in angular distribution could change the apparent intensity at certain viewing angles and thus the intensity of the beam.
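For reference, the Lambertian property in two lines (I_0 = on-axis intensity, A = die area):

```latex
% Lambertian emitter: the intensity falls with the cosine of the view angle,
I(\theta) = I_0 \cos\theta
% but the projected die area shrinks by the same factor, so the luminance
% (intensity per projected area) stays constant over viewing angle:
L_v(\theta) = \frac{I(\theta)}{A \cos\theta} = \frac{I_0}{A}
```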

I have a question about the units here. If brightness (assuming that is what your y axis represents) is measured in lux (= lm/m^2) and luminous flux is measured in lm, then the ratio of the two has units of 1/m^2. However, luminance is measured in cd/m^2 (= lm/(sr⋅m^2)). It would seem, then, that we lost the cd along the way.

I would offer another option, but one I have neither the knowledge nor the precise tools to take beyond the hypothetical.

It is possible we could be seeing “steering” of the beam, similar to how an AESA radar works: essentially, emission points next to each other influence the direction of travel.

I have seen research on this, and IIRC it has been put to use. Granted, those systems are designed to steer and shape the beam in specific ways, but it’s not impossible that similar effects could happen naturally.

In essence, we might be seeing a subtle but natural beam steering through electronic coupling.

Just a theory.

I have always wondered why I absolutely love the light put out by a small raw emitter. It has a completely different feel than anything sent through any sort of lens.

And this happens depending on the operating current of the LED, without changing anything else in the setup?
This is the problem I have with it. Of course the luminance is calculated the way you describe, but the ratio of brightness to luminous flux is still nonlinear, which shouldn’t be the case here.

I see what you are saying: lux/lm cancels out the cd, so I would also expect linearity there. I’d venture to say that if your observations are not due to measurement error, then it seems like you have invented a new unit of measurement (1/m^2) by which we can compare LEDs. Sounds absurd. Does this reproduce with multiple sensors?

I discovered this effect with different lux meters (sensors), although due to limited time I showed it only with one LED; see this diagram:

This effect was present with different LEDs on different lux meters and changed from LED to LED. So even if there is nonlinearity in the sensor, the effect is still there.

The green curve is from a lux meter which has a problem with the automatic switching of the measurement range; the values differ slightly there, hence the huge jump at around 1 A of current.