Intel and its subsidiary Altera have unveiled new chips and FPGAs optimized to bring AI capabilities to edge computing. These include the Intel Core Ultra processor, which promises up to five times better performance on image classification tasks than the previous (14th) generation.
The chip combines an Intel Arc GPU with a neural processing unit (NPU) in a simplified system-on-chip (SoC) configuration. According to the company, it is suitable for retail, education, and manufacturing, among other industries. It enables generative AI-driven sales kiosks, intelligent cash registers, interactive whiteboards, and AI vision devices (for example, cameras or sensors for production lines).
Better performance and scalability
The chipmaker is also releasing new Intel Core processors for edge workloads, alongside new Intel Atom processors. The Core chips are based on the older 13th-generation Intel Core desktop chips but promise more than 2.5 times better graphics performance. Because they use LGA sockets, they are also suitable for scaling systems up.
The less powerful Atom processors are optimized for networking, telecommunications, and manufacturing scenarios, among others. These chips can handle tasks such as AI-assisted threat detection (including zero-day threats) and quality control.
Intel is also offering another new Arc GPU as an add-in for legacy Intel systems that need extra performance for graphics-heavy workloads, media production, and AI inferencing.
Tip: AMD responds to ‘industrial mega-trends’ with powerful embedded chips
Programmable for specific workloads
Intel subsidiary Altera is releasing the new Agilex 5 SoC FPGAs (field-programmable gate arrays). These flexible, programmable chips are designed to support AI capabilities in edge devices without requiring a separate AI accelerator.
According to Intel, these FPGAs offer up to two times better performance than previous models. They are programmable for specific workloads with tools such as Quartus Prime software and the Intel OpenVINO toolkit.
Local processing of AI data
Intel’s approach to these chips is interesting because AI often involves processing large amounts of data in purpose-built data centers. Instead, in the cases the company describes, the data is processed ‘on premises’ or at least close to where it was generated.
The primary purpose of such edge computing is to reduce latency, the delay between data generation and processing. Deploying AI workloads this way enables a wide range of practical applications.
Examples include industrial automation, self-driving vehicles, retail and promotional applications, medical monitoring, and Internet of Things (IoT) devices.
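The latency argument above can be sketched numerically. The figures below are assumed, order-of-magnitude illustrations (not Intel benchmarks): edge hardware may run a model more slowly than a data-center accelerator, yet still respond sooner overall because the network round trip largely disappears.

```python
# Hedged illustration of why edge inference can cut end-to-end latency.
# All millisecond values are assumed for illustration, not measured figures.

CLOUD_RTT_MS = 80.0    # assumed network round trip to a remote data center
CLOUD_INFER_MS = 10.0  # assumed inference time on data-center hardware
EDGE_RTT_MS = 1.0      # assumed on-premises hop (near zero if on-device)
EDGE_INFER_MS = 25.0   # assumed inference time on a slower edge NPU

def total_latency_ms(rtt_ms: float, infer_ms: float) -> float:
    """Simplified end-to-end latency: transport time plus inference time."""
    return rtt_ms + infer_ms

cloud = total_latency_ms(CLOUD_RTT_MS, CLOUD_INFER_MS)
edge = total_latency_ms(EDGE_RTT_MS, EDGE_INFER_MS)
print(f"cloud: {cloud:.1f} ms, edge: {edge:.1f} ms")
```

Under these assumptions, the edge path answers in 26 ms versus 90 ms for the cloud path, even though its inference step is slower; for a production-line camera or a self-driving vehicle, that difference is what makes real-time reaction feasible.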
Also read: Intel aims for developers to utilize 100 million AI PCs by 2025