Together, Akamai and Neural Magic aim to democratize AI workloads. Given shortages of prohibitively expensive GPUs, running AI on Akamai infrastructure may be an attractive alternative.

The combined solution allows deep learning computations to run on server CPUs. Neural Magic, which specializes in “software-delivered AI,” makes it possible to use Akamai’s existing infrastructure for this purpose.

CPUs useful for AI

CPUs are significantly slower than GPUs for most AI calculations, with generative AI as a prominent example. The thousands to tens of thousands of cores in a single graphics card complete the myriad parallel AI calculations faster than the at most 288 E-cores in Intel’s new Xeon “Sierra Forest” CPU for data centers. Conventional processors are best suited to inference: the stage in which a trained model produces outputs, which is far less computationally demanding than training. Neural Magic’s solution leverages CPU resources for exactly this purpose, allowing Akamai’s edge servers to handle AI workloads as well.
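To see why inference is feasible on a CPU, consider that it is essentially a forward pass: matrix multiplications and elementwise nonlinearities. The toy sketch below (random stand-in weights, not Neural Magic’s actual software) runs such a pass on an ordinary CPU with only NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network: 8 inputs -> 16 hidden units -> 3 outputs.
# The weights are random placeholders standing in for a trained model.
W1, b1 = rng.standard_normal((8, 16)), np.zeros(16)
W2, b2 = rng.standard_normal((16, 3)), np.zeros(3)

def infer(x: np.ndarray) -> np.ndarray:
    """Run one forward pass (inference) entirely on the CPU."""
    h = np.maximum(x @ W1 + b1, 0.0)              # ReLU hidden layer
    logits = h @ W2 + b2
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)  # softmax probabilities

probs = infer(rng.standard_normal((4, 8)))  # batch of 4 inputs
print(probs.shape)  # one probability row per input: (4, 3)
```

Training would require many such passes plus gradient computations over huge datasets, which is where GPUs dominate; serving an already-trained model is a far lighter load.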

The emphasis of Akamai and Neural Magic’s announcement is therefore on efficiency, although performance is not neglected either. The wide availability of Akamai Connected Cloud should ensure that customers worldwide can take advantage of the offering, while the distributed nature of Akamai’s infrastructure should keep latency very low.

According to John O’Hara, SVP Engineering and COO at Neural Magic, running AI on the edge is more challenging than it seems. “Specialized or expensive hardware and associated power and delivery requirements are not always available or feasible, leaving organizations to effectively miss out on leveraging the benefits of running AI inference at the edge.” Akamai Chief Strategist Ramanath Iyer added that the collaboration gives organizations “much-needed cost efficiencies and higher performance.”

Also read: Akamai adds AI protection against DDoS attacks