The company BitEnergy AI has found a way to reduce the energy consumption of AI applications significantly.
The method addresses a major challenge in modern technology: large language models require enormous amounts of computing power, which results in high energy consumption. ChatGPT's daily power consumption alone is roughly equivalent to that of tens of thousands of households, and that is just one application. As AI continues to grow in popularity, the additional power demand could become enormous.
New algorithm
How does BitEnergy AI manage to reduce this power consumption? AI computations typically rely on floating-point tensor multiplication (FPM), in which numbers stored in a floating-point format are multiplied. FPM allows applications to perform complex calculations with high precision, but the operation consumes a great deal of power.
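To make that concrete, here is a minimal Python sketch of what a single floating-point multiplication involves: each number is split into a sign, an exponent and a mantissa, the exponents are added and the mantissas are multiplied. The function names are ours for illustration, a float32 layout is assumed, and zero or subnormal inputs are not handled.

import struct

def decompose(x: float) -> tuple[int, int, float]:
    """Split a float32 into its sign bit, unbiased exponent and mantissa fraction."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    sign = bits >> 31
    exponent = ((bits >> 23) & 0xFF) - 127        # remove the IEEE 754 exponent bias
    mantissa = (bits & 0x7FFFFF) / 2**23          # fractional part, so |x| = (1 + m) * 2**e
    return sign, exponent, mantissa

def fp_multiply(x: float, y: float) -> float:
    """Multiply the way floating-point hardware does: add exponents, multiply mantissas."""
    sx, ex, mx = decompose(x)
    sy, ey, my = decompose(y)
    sign = -1.0 if sx ^ sy else 1.0
    # The mantissa product (1 + mx) * (1 + my) is the expensive step in silicon;
    # adding the exponents is comparatively cheap.
    return sign * (1 + mx) * (1 + my) * 2.0 ** (ex + ey)

print(fp_multiply(3.5, -2.25))   # -7.875, matching 3.5 * -2.25

It is that mantissa multiplication, repeated billions of times per second across tensor operations, that accounts for much of the energy cost.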
BitEnergy AI, however, proposes a new linear-complexity multiplication algorithm called L-Mul. It approximates traditional floating-point multiplication using integer adders. According to the researchers, this change has minimal impact on the precision and accuracy of the calculations, while offering the major advantage of consuming far less energy.
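The core idea can be sketched in a few lines of Python: instead of multiplying the mantissas, only add them and compensate with a small constant offset, so the whole product comes down to additions. This is an illustrative sketch of the published idea, not the authors' implementation, which operates directly on the integer bit patterns in hardware; the 2**-4 offset reflects our reading of the paper and should be treated as an assumption.

import math

def l_mul_approx(x: float, y: float) -> float:
    """Approximate x*y using only additions on signs, exponents and mantissas (L-Mul-style sketch)."""
    if x == 0 or y == 0:
        return 0.0
    # Decompose each operand as +/- (1 + m) * 2**e with 0 <= m < 1.
    fx, ex = math.frexp(abs(x))     # abs(x) = fx * 2**ex with 0.5 <= fx < 1
    fy, ey = math.frexp(abs(y))
    mx, my = 2 * fx - 1, 2 * fy - 1
    ex, ey = ex - 1, ey - 1
    sign = -1.0 if (x < 0) != (y < 0) else 1.0
    # The exact product would need the mantissa term mx*my:
    #   (1 + mx) * (1 + my) = 1 + mx + my + mx*my
    # L-Mul drops mx*my and adds a small constant offset instead (assumed 2**-4 here),
    # leaving only additions.
    mantissa = 1 + mx + my + 2.0 ** -4
    exponent = ex + ey
    if mantissa >= 2.0:             # carry into the exponent, as an adder circuit would
        mantissa, exponent = mantissa / 2.0, exponent + 1
    return sign * mantissa * 2.0 ** exponent

print(l_mul_approx(3.5, -2.25))     # about -7.75, versus the exact -7.875

In hardware, these additions map onto cheap integer adders rather than multiplier circuits, which is where the claimed energy savings come from.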
The question, however, is whether this technique can be implemented in the short term. Using it requires different hardware from what AI applications currently run on.