We have two main methods of regulating brightness for our high-power LED flashlights: PWM with a MOSFET or linear constant-current regulation. Ideally, from an efficiency standpoint, we would use buck drivers, but efficient high-power buck drivers are not readily available. It has been shown that constant current through the LED produces light more efficiently than pulsed current at the same average current. This is because at higher currents a type of non-radiative recombination called Auger recombination becomes more likely. High temperature also reduces LED efficiency, but at equal average currents the LED temperatures with the two methods should be similar. See here and here for some efficiency measurements.
The purpose of this write-up is not to analyze the LED efficiency, but rather to analyze where the power is dissipated in the circuit with each method. For example, for a single XPL driven by a fully charged single Li-ion cell, the direct-drive current might be around 6A. Now say we want to reduce the average current to 3A to reduce the brightness. The total power drawn from the cell is the same for either regulation method, since at any given time the same amount of charge has been drawn from the cell at the same cell voltage. But that power is dissipated in different parts of the circuit for the two methods.
The power can be dissipated in three general areas of the circuit. It can be dissipated in the LED; some of that power turns into light and some into heat. Power can also be dissipated in the resistance of the circuit as Joule heating. This resistance includes the resistance of all of the wires and the cell, but not the driver itself. The third area of power dissipation is the driver. Power is dissipated in the driver in the linear regulation case but not in the PWM case, since I will approximate the FET in the PWM case as a perfect switch with no losses.
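Put another way, at every instant the power drawn from the cell has to end up in one of those three places:

P (total) = P (LED) + P (circuit) + P (driver)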
In the linear regulation method, the driver itself dissipates heat. In the PWM method the driver dissipates no heat. So where does the power go? In the following analysis it will be shown that the power consumption in both cases is in fact equal, that in the PWM method additional power is dissipated in both the LED and circuit resistances, and that this excess power exactly balances the excess power that is dissipated in the linear driver.
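Here is a quick sketch of why the totals must match (my notation: V_cell is the unloaded cell voltage, I_avg is the average LED current, and D is the PWM duty cycle, so the pulse current is I_avg/D; the cell's internal resistance is lumped into the circuit resistance as above):

P (total, linear) = V_cell × I_avg
P (total, PWM) = D × V_cell × (I_avg/D) = V_cell × I_avg

At any instant the cell sources its full unloaded voltage times whatever current is flowing, so equal average currents mean equal total power draw no matter how the current is chopped up.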
In summary, for the PWM case more power is dissipated in the LED because of the higher LED forward voltage at the higher pulsed current. So even at the same average current more power is dissipated because of the larger voltage drop. More power is also dissipated in the circuit resistance because the power is proportional to the current squared. These extra power dissipations for the PWM case equal the extra power dissipated in the driver for the linear case, so that the total power consumption for each method is the same, for equal average currents.
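In the same notation, with Vf(I) for the LED forward voltage at current I and R for the circuit resistance:

P (circuit, linear) = I_avg^2 × R
P (circuit, PWM) = D × (I_avg/D)^2 × R = I_avg^2 × R/D

P (LED, linear) = I_avg × Vf(I_avg)
P (LED, PWM) = D × (I_avg/D) × Vf(I_avg/D) = I_avg × Vf(I_avg/D)

Since D < 1 and Vf rises with current, both PWM terms are larger than their linear counterparts, and by the energy balance above the difference is exactly P (driver) in the linear case.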
Now an example using these XML2 forward voltage measurements. Let's say a fully charged cell (4.2V) direct driving this XML2 results in 6A. The forward voltage at 6A is 3.75V, so the circuit resistance must be (4.2V - 3.75V)/6A = 0.075 ohms.
Now we want to reduce the average current to 3A where the LED forward voltage is 3.37V. With the linear method:
P (driver)=[4.2V-3A(0.075ohms)-3.37V](3A)=1.815W
P (circuit)=(3A)^2(0.075ohms)=0.675W
P (LED)=3A(3.37V)=10.11W
With the PWM method, the LED is pulsed at the 6A direct-drive current, so a duty cycle of 0.5 gives the same 3A average current:
P (circuit)=(0.5)(6A)^2(0.075ohms)=1.35W
P (LED)=(0.5)6A(3.75V)=11.25W
The total power for each case is the same at 12.6W. Again, note that the PWM method in this case will produce less light, because the LED converts current to light less efficiently at the higher pulsed current.
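If you want to run the same bookkeeping with other currents, cells, or LEDs, here is a minimal Python sketch of it (the structure and names are just mine; the numbers are the ones from the example above, so substitute your own measurements):

```python
# Bookkeeping sketch for the example above: 4.2V unloaded cell, 0.075 ohm
# circuit resistance, forward voltages of 3.75V @ 6A and 3.37V @ 3A.

V_CELL = 4.2       # unloaded cell voltage (V)
R_CIRCUIT = 0.075  # wires + cell internal resistance (ohms)

def linear(i_avg, vf):
    """Linear regulation: constant i_avg amps, LED forward voltage vf."""
    p_circuit = i_avg**2 * R_CIRCUIT
    p_led = i_avg * vf
    p_driver = (V_CELL - i_avg * R_CIRCUIT - vf) * i_avg  # driver drops the leftover voltage
    return p_driver, p_circuit, p_led

def pwm(i_pulse, vf_pulse, duty):
    """PWM: pulses of i_pulse amps (direct drive) at the given duty cycle, FET assumed lossless."""
    p_circuit = duty * i_pulse**2 * R_CIRCUIT
    p_led = duty * i_pulse * vf_pulse
    p_driver = 0.0
    return p_driver, p_circuit, p_led

lin = linear(3.0, 3.37)   # 3A average, Vf = 3.37V at 3A
pw = pwm(6.0, 3.75, 0.5)  # 6A pulses, Vf = 3.75V at 6A, 50% duty -> 3A average

print("linear: driver %.3fW  circuit %.3fW  LED %.2fW  total %.2fW" % (*lin, sum(lin)))
print("PWM:    driver %.3fW  circuit %.3fW  LED %.2fW  total %.2fW" % (*pw, sum(pw)))
```

Both cases print a 12.6W total, matching the hand calculation above.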
So, there’s nothing here that will change how we do things, but it was something I was confused about and I think the result is interesting and satisfying.