I have an idea, but I need your input.

Okay, this is the WT90 flashlight.
Runtime test with a Samsung 50S (21700, 5000 mAh, 25A CDR) that is a year and a half old, with 120 cycles and a capacity, checked just yesterday, of 3,845 mAh. New, these cells measure around 4,950 mAh.
The other battery is a Tenpower 50XG tabless cell (5000 mAh, 50A CDR), brand new, tested at 4,865 mAh… just over 1,000 mAh more than the 50S.

Mooch tested the 50XG and confirmed its 50A CDR.

The run schedule was 15/15/10 minutes, totaling 40 minutes, with 5 minutes of rest between increments.

50S = 3.578 V after 40 min.
50XG = 3.502 V after 40 min.

Not a tremendous difference. But consider the huge milliamp-hour advantage the brand-new cell has over the older one, and that the 50XG has double the CDR (50A vs 25A), yet it still ended at the lower voltage.

How did that happen?

My only thought is: more amps = more output = faster drain on the battery?

Thoughts from the experts, please? :thinking:

Would it be better to test capacity after your test? Cell voltage is only loosely related to state of charge.
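
As a rough illustration of how coarse that relationship is, here is a minimal sketch in Python. The open-circuit-voltage table is a ballpark guess for an NMC-type 21700, invented purely for illustration; real curves vary with chemistry, age, load history, and rebound time.

```python
# Very rough OCV -> state-of-charge lookup. The table is a ballpark guess
# for an NMC-type 21700 and is invented for illustration only.
OCV_SOC = [(3.0, 0), (3.3, 5), (3.5, 15), (3.6, 30), (3.7, 45),
           (3.8, 60), (4.0, 85), (4.2, 100)]

def soc_from_ocv(volts):
    """Linear interpolation over the table above, clamped at the ends."""
    if volts <= OCV_SOC[0][0]:
        return OCV_SOC[0][1]
    for (v0, s0), (v1, s1) in zip(OCV_SOC, OCV_SOC[1:]):
        if volts <= v1:
            return s0 + (s1 - s0) * (volts - v0) / (v1 - v0)
    return OCV_SOC[-1][1]

print(f"3.578 V -> ~{soc_from_ocv(3.578):.0f}%   3.502 V -> ~{soc_from_ocv(3.502):.0f}%")
```

Even under these made-up assumptions, a 76 mV gap in the flat middle of the curve spans a double-digit state-of-charge range, so a resting voltage reading by itself is a blunt instrument.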

2 Thanks

The WT90 uses a linear + FET driver.

In direct drive mode, the lower resistance battery will allow the emitter to draw more current, increasing power draw and emitter output.

Therefore, in a direct drive light, I’d expect the Tenpower 50XG to have higher output for a shorter amount of time vs the 50S.
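
To put rough numbers on that, here is a minimal sketch. The linear LED model and every resistance value below are invented assumptions, not measured WT90 or cell figures. In direct drive, the current settles where the cell's sagged voltage meets the LED's forward voltage, so a lower-resistance cell lands at a higher current.

```python
# Sketch of FET (direct drive) current draw. The LED model (vf0 + r_led*I)
# and all resistance values are invented for illustration.
def direct_drive_current(ocv, r_cell, vf0=2.9, r_led=0.05, r_path=0.02):
    """Solve ocv = I*(r_cell + r_path) + vf0 + I*r_led for the current I."""
    return (ocv - vf0) / (r_cell + r_path + r_led)

# Hypothetical internal resistances: a fresh tabless cell vs an aged cell.
for name, r_cell in [("aged 50S", 0.030), ("new 50XG", 0.012)]:
    print(f"{name}: ~{direct_drive_current(4.2, r_cell):.1f} A at turn-on")
```

With these made-up values the lower-IR cell pulls a few amps more, which means more output but also a faster drain.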

4 Thanks

I don’t know. I doubt very much that the capacity of either cell changed after one test. It’s just surprising to me that a cell with more than a thousand milliamp hours of extra capacity ended the test at a lower voltage. Again, the only thing I can think of is more amps equals more output equals slightly less runtime/voltage.

Thanks, that makes sense. It’s sort of what I said, just without the detail and knowledge you brought to it.

1 Thank

Were it me, that is what I would do. You just can’t draw many conclusions based on a voltage check by itself. Then, of course, there is the part that @BlueSwordM mentioned.

3 Thanks

We all have our opinions. All I needed to hear is what @BlueSwordM said; that’s the answer. I don’t know a lot about drivers and all that stuff, but I basically said the same thing, just not as eloquently and scientifically as he did!

Personally, I think a capacity test before using a battery is more important than one after, unless there’s a drastic difference in the results, which there wasn’t. In my opinion he explained it perfectly and it made sense, so there’s no reason for a capacity test.

No problem. But you should know that measuring the voltage of a LiIon cell won’t tell you much about its state of charge, so it’s not a good metric for comparing cell performance.

Of course, if you feel that your question has been answered, all is good.

3 Thanks

I don’t think the suggestion is about testing the overall capacity of the cell after this one test, but about checking how many mAh will fit back into the cells if charged to full from the voltages you see now.

Something like calculating fuel consumption by filling the tank to the brim, driving some distance, topping it up again, and seeing how much fuel that drive used.

1 Thank

Yes. More specifically: test the full capacity of each cell first, run the runtime test, then discharge the cells to find the remaining capacity. Subtract that from the number found in the original capacity test; the result is the amount of mAh used in the runtime test. This gives a better idea of what was going on. A little math could also give watt-hours used, which would be even more interesting.
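
In sketch form, with every number hypothetical (and noting that a real watt-hour figure would integrate voltage times current over the discharge rather than use an average voltage):

```python
# Bookkeeping for the method described above; all numbers are hypothetical.
full_mah = 4865        # full capacity from the initial discharge test
remaining_mah = 2900   # capacity left after the runtime test (assumed)
avg_volts = 3.6        # rough average cell voltage during the run (assumed)

used_mah = full_mah - remaining_mah
used_wh = used_mah / 1000 * avg_volts    # mAh -> Ah, then Ah * V = Wh

print(f"used {used_mah} mAh, roughly {used_wh:.1f} Wh")
```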

That should give the same result, just calculated from “opposite ends” of the capacity, I think, right? Whether you run a discharge test on the remaining charge and subtract the result from the measured capacity, or charge the cell back up and measure how much was needed to reach full again, you should get the same number. One is measured directly (how much can be charged back in, i.e. how much was consumed) and the other indirectly (how much was remaining, to calculate how much must have been consumed).

Not necessarily. Charging a cell always involves losses, especially the heat generated in the chemical conversion process. Also, if your charger can display voltage after termination, look at the cell voltage at termination, then check again in half an hour, then after a few more hours. It will keep settling to a lower voltage as time passes (up to a point). When you then drain that cell in your test from the lower resting voltage (maybe 4.17 volts, often less) and charge it back to 4.2, it will take more energy than what you actually used in the test.

It is more accurate to measure capacity using a charge/discharge cycle that stops at a set voltage on both ends. I always measure capacity on discharge, regardless of whether it is a full-capacity test or a capacity-remaining test, which is why I suggested doing it that way. It takes out many of the variables.
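
A toy illustration of why the two bookkeeping methods diverge; the efficiency figure is a placeholder, not a measured value:

```python
# Toy numbers showing why "mAh charged back in" overstates "mAh used".
used_on_discharge_mah = 2800   # what a discharge test would report (assumed)
charge_efficiency = 0.97       # fraction of charge input actually stored (assumed)

charged_back_in_mah = used_on_discharge_mah / charge_efficiency
print(f"charger-side count ~{charged_back_in_mah:.0f} mAh, "
      f"about {charged_back_in_mah - used_on_discharge_mah:.0f} mAh high")
```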

1 Thank

I didn’t even think of the losses; your method should indeed be more precise.

However, don’t the chargers account for the losses in some way? And as for the voltage drop after coming off the charger, does that still happen when you leave the cells trickle charging at the end for a while? Genuinely curious, not trying to argue :slight_smile:

If a charger trickle charges Lithium Ion cells, it is defective. Trickle charging is used to top off NiMH cells but should never be used on LiIon cells. As for a charger compensating for the losses involved in the charge cycle, I don’t know of any that do, nor how one would write an algorithm that accurately accounts for them when measuring capacity from the charge cycle alone.

1 Thank

Besides mAh, there is also internal resistance: cells with higher resistance will sag more and discharge differently than cells with lower resistance, even with the same rated capacity.
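
A minimal sketch of that sag, assuming invented resistance values and a 10 A load:

```python
# Loaded voltage is roughly open-circuit voltage minus the I*R sag.
# Both internal-resistance values are invented for illustration.
def loaded_voltage(ocv, amps, r_internal):
    return ocv - amps * r_internal

for name, r in [("low-IR cell", 0.012), ("high-IR cell", 0.035)]:
    print(f"{name}: {loaded_voltage(3.9, 10, r):.2f} V under a 10 A load")
```

Two cells at the same true state of charge can therefore read very different voltages while the light is running.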