Here is why the CRI drops when mixing different LED tints.
The 100% reference curves for the CRI are the spectra of an ideal black-body radiator at different (color) temperatures. Each curve has a different maximum and a slightly different shape: the 5000 K curve peaks at about 580 nm, the 2700 K curve at about 1070 nm.
(Disclaimer: The CRI uses black-body curves only for CCTs below 5000 K. The ‘Illuminant D’ data used for higher CCTs are too ugly to show here.)
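If you want to reproduce the reference curves yourself, here is a minimal Python sketch of Planck's law plus Wien's displacement law for the peak wavelength. The constants and the sampling ranges are my own choices for illustration, not part of the CRI standard:

```python
import math

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck(wavelength_nm: float, temp_k: float) -> float:
    """Spectral radiance of a black body at the given wavelength and temperature."""
    lam = wavelength_nm * 1e-9  # convert nm to metres
    return (2 * H * C**2 / lam**5) / (math.exp(H * C / (lam * K * temp_k)) - 1)

def wien_peak_nm(temp_k: float) -> float:
    """Wavelength of maximum emission (Wien's displacement law)."""
    return 2.898e-3 / temp_k * 1e9

print(wien_peak_nm(5000))  # ~580 nm
print(wien_peak_nm(2700))  # ~1073 nm
```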
After normalizing the curves (dividing every value by the curve's maximum, so each maximum is 1 and the shapes are easier to compare) and zooming in on the visible range, you get this:
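The normalization step is just a division by each curve's own peak. A sketch, reusing `planck()` from above; note that the true peak can lie outside the visible range, so the search range for the maximum is wider:

```python
wavelengths = range(380, 781, 5)  # visible range in nm, 5 nm steps

def normalized_curve(temp_k: float) -> list[float]:
    """Black-body curve over the visible range, scaled so its global peak is 1."""
    values = [planck(w, temp_k) for w in wavelengths]
    # Search a wide range for the peak; e.g. the 2700 K peak sits near 1070 nm,
    # well outside the visible window.
    peak = max(planck(w, temp_k) for w in range(200, 2001, 5))
    return [v / peak for v in values]
```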
At each wavelength your LED's spectrum has to stay close to this curve to score a good calculated CRI. (These curves are the blue ones in maukka's diagrams.)
The curves look quite similar to each other, just shifted left or right, so each one shows a different part of itself in the visible range.
Now let's mix a high-CRI 2700 K LED and a 5700 K LED.
Their spectra fit the 2700 K and 5700 K curves almost exactly. Mixing their tints means summing the values of the two curves at each wavelength; with the same power output on both LEDs, that gives the dashed line.
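As a sketch of that summation, using the normalized black-body curves themselves as stand-ins for the actual LED spectra (which are only approximately black-body shaped):

```python
def mix(curve_a: list[float], curve_b: list[float],
        weight_a: float = 0.5, weight_b: float = 0.5) -> list[float]:
    """Weighted per-wavelength sum of two spectra."""
    return [weight_a * a + weight_b * b for a, b in zip(curve_a, curve_b)]

led_2700 = normalized_curve(2700)
led_5700 = normalized_curve(5700)
mixed_50_50 = mix(led_2700, led_5700)  # the dashed line: equal power on both
```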
If you make one LED brighter and the other dimmer, the mixed curve changes only slightly: its maximum and its slope move toward the curve of the stronger LED.
See the dotted 0.4/0.6 and the dash-dotted 0.8/0.2 curves for two such mixtures. The dash-dotted line would not be a bad fit for 3500 K, except it has too much blue.
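The other mixtures from the plot are just different weights passed to the same hypothetical `mix()` helper:

```python
mixed_40_60 = mix(led_2700, led_5700, 0.4, 0.6)  # the dotted curve
mixed_80_20 = mix(led_2700, led_5700, 0.8, 0.2)  # the dash-dotted curve
```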
I could not mimic the shape and position of the 4200 K curve.
The further the mixed tint moves away from either LED's own tint, the worse the match gets. Without a spectrum that resembles the 100% reference curve, the calculated CRI will be poor.
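To make the "distance from the 100% curve" idea concrete, here is a deliberately crude stand-in metric. The real CRI calculation renders a set of test colour samples in CIE colour space and is far more involved, so treat this RMS shape deviation purely as an illustration:

```python
def rms_deviation(spectrum: list[float], reference: list[float]) -> float:
    """Rough shape mismatch between two spectra (NOT the real CRI formula)."""
    # Rescale both so their peaks match before comparing shapes.
    s_max, r_max = max(spectrum), max(reference)
    return math.sqrt(sum((s / s_max - r / r_max) ** 2
                         for s, r in zip(spectrum, reference)) / len(spectrum))

# The bigger this number, the worse the mixed spectrum tracks the reference.
print(rms_deviation(mixed_50_50, normalized_curve(4200)))
```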
To really change the color temperature of an LED you would have to shift its spectrum along the wavelength axis, not just scale its amplitude.
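You can see why amplitude alone cannot do it: scaling a spectrum by any constant leaves its normalized shape, and therefore its fit to the reference curve, unchanged. A tiny check, reusing the curves from above:

```python
# Scaling the 2700 K curve by an arbitrary constant and re-normalizing
# gives back exactly the same shape, so no amount of dimming or boosting
# moves it toward a different reference curve.
scaled = [0.3 * v for v in led_2700]
renorm = [v / max(scaled) for v in scaled]
orig = [v / max(led_2700) for v in led_2700]
assert all(abs(a - b) < 1e-9 for a, b in zip(renorm, orig))
```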
The smaller the distance between the two LEDs' tints, the smaller the CRI penalty.