I initially put four of them in my D4, but I mainly use it indoors on lower levels…
So I replaced two of them with a 3000K variant.
Now I have a 3500K-ish beam with a nice CRI.
The 4000K on their own are quite pleasant, at least under a TIR optic (I didn't try a reflector).
What I really like is that they have a small die (like the XP-G2), so throw is improved compared with the 219C.
Yes, from that graph it's very easy to see why it is better to use 3000K+5700K emitters than 4000K+5000K emitters to get a ~4500K tint: the first combination has a much better chance of landing under the BBL.
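You can check this numerically with a rough sketch (my own illustration, not from the graphs in this thread): an additive mix of two emitters lies on the straight chord between their chromaticities, while the black body locus is curved, so a wide-CCT chord sags further below the locus than a narrow one. This uses Krystek's approximation of the Planckian locus in CIE 1960 (u, v); the 50/50 midpoint and the brute-force nearest-point search are simplifications.

```python
import math

def planck_uv(T):
    """Approximate CIE 1960 (u, v) of a blackbody at T kelvin
    (Krystek's 1985 rational approximation, valid ~1000-15000 K)."""
    u = (0.860117757 + 1.54118254e-4*T + 1.28641212e-7*T*T) / \
        (1 + 8.42420235e-4*T + 7.08145163e-7*T*T)
    v = (0.317398726 + 4.22806245e-5*T + 4.20481691e-8*T*T) / \
        (1 - 2.89741816e-5*T + 1.61456053e-7*T*T)
    return u, v

def mix_duv(T1, T2):
    """Signed distance (Duv-style) of the 50/50 chord midpoint between
    two blackbody chromaticities from the Planckian locus.
    Negative = below the BBL, i.e. rosy."""
    u1, v1 = planck_uv(T1)
    u2, v2 = planck_uv(T2)
    um, vm = (u1 + u2) / 2, (v1 + v2) / 2
    # brute-force search for the nearest point on the locus
    best = min(range(1500, 8001),
               key=lambda T: math.dist((um, vm), planck_uv(T)))
    ub, vb = planck_uv(best)
    d = math.dist((um, vm), (ub, vb))
    return d if vm > vb else -d

print(mix_duv(3000, 5700))   # clearly negative (rosy mix)
print(mix_duv(4000, 5000))   # negative, but much closer to zero
```

The wide 3000K+5700K pair ends up several times further below the locus than the narrow 4000K+5000K pair, which is exactly the effect the graph shows.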
Yes, the 2700K-to-6500K tint mixing should be done with an additional set of 4000K LEDs in between. I tried to make this point in the BLF lantern topic weeks ago, for exactly the reasons your last figure shows.
Thanks for those helpful graphs. My quad E21A 2500K with 2x 2000K and 2x 3000K is much rosier than either the 4x 2000K or the 4x 3000K variant. My quad E21A 3500K with 2x 3000K and 2x 4000K is rosier than either 4x 3000K or 4x 4000K.
From my observation, comparing emitters of the same CCT, a below-the-BBL tint does not seem to tint objects as much as an above-the-BBL tint: white objects look whiter below the BBL, whereas they look yellowish/greenish above it. I'm guessing this is why members here say mixing CCTs improves color rendering, even though all it really does is give you a tint below the BBL.
I think it does make sense, since a spectrum that exactly matches black-body radiation is considered 100 CRI.
(For now, let's assume that a given spectrum is perfect for a given color temperature; we don't want to get into correlated color temperature, which is calculated for suboptimal light sources with uneven spectra.)
Once the tint goes above or below the black body line, it no longer matches black-body radiation, so it cannot be 100 CRI under any circumstances.
The reason many people tend to prefer 'under BBL' tints is that they give more saturation to the scene.
Please correct me if I remember this wrong, but the latest LED tests now use two summary values instead of the single CRI (Ra).
The first is the fidelity index (Rf), which describes the accuracy of color rendering; it can be at most 100.
The second is the gamut index (Rg), which can actually exceed 100.
If Rg is above 100, the source adds saturation to the scene (this is the case with 'under BBL' tints): we get more light in the parts of the spectrum where the human eye is less sensitive, so color perception gets boosted somewhat (and, as a side effect, the tint becomes rosy). This can be perceived as pleasant and can make the source feel 'superior', so the drop in CRI/Rf seems not to matter, despite the measurements.
If Rg is under 100, the source desaturates the scene (as with 'above BBL' tints): blue and red tend to go missing, and in most cases we describe the tint as 'greenish'. These tints can sometimes feel less useful or 'inferior' even with relatively high CRI/Rf readings.
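As a toy illustration of what Rg measures (hypothetical numbers, not the real TM-30 color samples): Rg compares the area of the test source's gamut polygon, formed from the averaged chroma coordinates of 16 hue-angle bins, against the reference illuminant's polygon. A source that oversaturates every hue uniformly by 5% therefore gets Rg well above 100, since area grows with the square of chroma.

```python
import math

def polygon_area(points):
    """Shoelace formula for the area of a closed polygon."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2

# Reference illuminant: 16 hue-bin chroma points on a unit circle
# (toy values; TM-30 derives these from 99 color evaluation samples).
ref = [(math.cos(2 * math.pi * i / 16), math.sin(2 * math.pi * i / 16))
       for i in range(16)]

# Test source that boosts saturation by 5% in every hue bin.
# (A rosy, under-BBL mix would in reality boost mainly red/blue bins.)
test = [(1.05 * x, 1.05 * y) for x, y in ref]

rg = 100 * polygon_area(test) / polygon_area(ref)
print(round(rg, 2))  # 110.25 — area scales with the square of chroma
```

A source with Rg below 100 would be the mirror case: its gamut polygon shrinks inside the reference polygon, which is the desaturated, greenish situation described above.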
Look at any measurement maukka has posted recently. On the right there is a graph with Rf and Rg axes, with a triangular area on a white (or lighter gray) background: it shows how much under/oversaturation (distance from 100 on the Rg axis) is possible for a given Rf value. Naturally, less under/oversaturation is possible the higher the Rf value.
Now I hope this makes sense to you.
This is why I love that red dot in that graph (and the 99 color samples as well, but for a different reason).