Intel has unveiled its latest generation of central processing units for multi-socket servers, along with a new accelerator chip that features an artificial intelligence module developed in collaboration with Microsoft.
Intel calls the new CPU series Cooper Lake. There are 11 chips in the portfolio, all designed to power servers with four or eight processors. That's a small niche of machines usually reserved for specialized high-performance workloads, such as in-memory analytics.
High socket counts suit such workloads because the more CPUs a system has, the more memory it can handle. Intel says that four Cooper Lake CPUs can deliver 1.9 times the performance of a five-year-old machine with comparable specs.
The specs are awe-inspiring.
The Xeon Platinum 8380HL is the fastest processor in the series. It has 28 cores with a base frequency of 2.9GHz, and its price tag is set at $13,012. A look under the hood reveals that Cooper Lake has made several technical leaps. The CPUs can accommodate up to 4.5 terabytes of memory, depending on the model.
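Because per-socket memory capacity multiplies with socket count, high-socket machines are attractive for in-memory workloads. A quick back-of-the-envelope sketch, assuming the 4.5-terabyte per-CPU maximum applies to every socket in the system:

```python
# Hypothetical illustration: total addressable memory scales with socket count.
# Assumes each socket reaches the 4.5 TB top-end Cooper Lake capacity.
per_socket_tb = 4.5

for sockets in (4, 8):
    total = sockets * per_socket_tb
    print(f"{sockets}-socket server: up to {total:.0f} TB of memory")
# 4-socket server: up to 18 TB of memory
# 8-socket server: up to 36 TB of memory
```

That headroom is what makes in-memory analytics on these machines practical.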
They also support two faster memory variants: DDR4-3200 and Optane Barlow Pass, which offer up to 9.1% and 25% better data access speeds, respectively, than what was possible before.
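The 9.1% figure lines up with the raw transfer-rate jump from DDR4-2933 (the ceiling on the prior generation, an assumption on our part) to DDR4-3200. A quick sanity check:

```python
# Assumed baseline: DDR4-2933, the fastest memory the prior generation supported.
prev_mts = 2933  # megatransfers per second
new_mts = 3200   # DDR4-3200 on Cooper Lake

improvement = (new_mts / prev_mts - 1) * 100
print(f"{improvement:.1f}% faster")  # 9.1% faster
```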
AI in chips equals better performance
Intel has also improved how the CPUs perform in a multiprocessor server. The inter-chip connections that link Cooper Lake CPUs together can now carry 20.8 gigatransfers of data per second, twice as much as the previous generation of processors.
The new Stratix 10 NX FPGA, running an AI model the size of BERT at batch size 1, can deliver 2.3 times better computing performance than an NVIDIA V100. The role of AI in Intel's chips is unmistakably changing the landscape.