
From 4 cores and 8 threads to 56 cores and 112 threads: Intel launches a broad portfolio of 58 new Xeon processors under the Xeon Scalable banner. On paper, just about every professional workload is covered by the offering.

In the second half of 2017, Intel restructured its entire Xeon portfolio with the introduction of Xeon Scalable. The old E labels were replaced by a unified range of 52 new processors, distinguished from one another by type numbers and tiers such as Bronze, Silver, Gold and Platinum. The idea: have the right Xeon in the range for every workload, from workstation to HPC server.

112 threads

Almost two years later, Intel has decided it is time for a refresh: the entire Scalable family is being upgraded. The second generation of Intel Xeon Scalable contains 58 new chips that are even more diverse than their predecessors. The most powerful chip from 2017, the Xeon Platinum 8180, topped out at 28 cores (56 threads) with a base clock of 2.5 GHz. The most advanced chip today packs no fewer than 56 cores, good for 112 threads, and does not compromise on clock speed. On the contrary: Intel claims a base frequency of 2.6 GHz for the Xeon Platinum 9282. On paper, that makes this monster at least twice as powerful as its predecessor.

The server chip is built on the 14 nm Cascade Lake architecture, just like the rest of the extended line-up. Intel feeds the processor with data via an unprecedented 77 MB of L3 cache. The TDP, unsurprisingly, is a hefty 400 watts.

To reach that extreme core count, Intel took a page from AMD's Threadripper playbook. Under the hood, the processor contains two 28-core dies working together. Intel gives this series type numbers that start with 9. These Xeons will not be sold separately, only soldered onto a motherboard. The rest of the new Xeon Scalable chips are available separately, with the most advanced part, the Xeon Platinum 8280, containing 28 cores, just like two years ago.

Renewed line-up

The entire Xeon Scalable line-up now supports DDR4 memory at 2,933 MHz. Intel further extends the AVX-512 instruction set with additional capabilities focused on machine learning; Intel calls that feature Deep Learning Boost. A number of hardware mitigations, meanwhile, address the Spectre and Meltdown vulnerabilities linked to speculative execution. Most of the chips are available immediately, although a handful of models will follow a little later.

Precious metals indicate how advanced Intel considers a part, with Platinum naturally at the top of the line-up and Bronze at the bottom. Letter suffixes then suggest what type of workload Intel considers the processors suitable for. N, V and S indicate that chips are geared towards network functions, virtualisation and search respectively. T chips, for their part, are built to withstand high thermal loads over long periods, while Y-labelled chips have the new Speed Select feature on board. This lets the chip assign its thermal headroom to priority cores, for workloads that demand the highest possible per-thread performance.

Optane and Xeon D

A final notable innovation is compatibility with Optane memory based on 3D XPoint, which is non-volatile. That should enable a new class of memory-centric workloads.

In addition to the Scalable line-up, Intel also offers Xeon D. Optimised for edge computing, the new Xeon D-1600 SoC is at home in environments with less thermal headroom and less power available. Intel sees the chip as essential in the roll-out of tomorrow's infrastructure, in which 5G will play an important role.

Beware of AMD

Intel launches its Scalable offensive ahead of AMD, which is soon expected to launch the next generation of its Epyc data centre chips. Unlike the Intel hardware, these Epyc Rome chips will be baked at 7 nm, although Intel itself believes that the difference between TSMC's 7 nm process and Intel's proprietary 14 nm node, in terms of density benefits, is not as great as it seems.

Intel’s offering is extensive and extremely powerful. Whether it will be enough to maintain the chip manufacturer’s dominant position in the data centre remains to be seen. Analysts predict that AMD will very soon claim 10 percent of the market with Epyc, up from barely 0.8 percent at the launch of the previous generation of chips at the end of 2017.

Related: AMD challenges Intel with 7nm Epyc demonstration

This news article was automatically translated from Dutch to give Techzine.eu a head start. All news articles after September 1, 2019 are written in native English and NOT translated. All our background stories are written in native English as well. For more information read our launch article.