
Meta’s vendor-agnostic toolkit helps optimize AI for various GPUs

Facebook parent Meta has announced a new set of free development tools that help developers optimize artificial intelligence applications for multiple processors.

According to the company, the new open-source AI platform is built on PyTorch, an open-source machine learning framework. Meta says the toolkit can make code run up to twelve times faster on Nvidia’s flagship A100 GPU and up to four times faster on AMD’s MI250 chip. The software’s versatility is as significant as the performance improvement, Meta stated in a blog post.
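The article does not name the toolkit or show its API, so the sketch below uses PyTorch’s built-in `torch.compile` as a stand-in to illustrate the general idea being described: taking an existing PyTorch model and compiling it into a faster executable for whatever GPU is present. The choice of ResNet-50, the batch size, and the timing loop are illustrative assumptions, not Meta’s benchmark.

```python
import time
import torch
import torchvision

# Stand-in illustration using torch.compile; the article's (unnamed) Meta toolkit is not used here.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Any off-the-shelf PyTorch model works as a workload; ResNet-50 is just an example.
model = torchvision.models.resnet50().eval().to(device)
batch = torch.randn(8, 3, 224, 224, device=device)

def bench(m, iters=50):
    # Warm up, then time repeated forward passes (inference only, no gradients).
    with torch.no_grad():
        for _ in range(5):
            m(batch)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            m(batch)
        if device == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

eager_time = bench(model)
compiled_time = bench(torch.compile(model))
print(f"eager: {eager_time * 1000:.1f} ms, compiled: {compiled_time * 1000:.1f} ms")
```

The actual speedup depends on the model, the hardware, and the compiler back-end; the headline figures Meta quotes refer to its own toolkit on A100 and MI250 hardware.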

Software has emerged as a critical battlefield for chipmakers attempting to cultivate an ecosystem that entices developers to adopt their chips. Thus far, Nvidia’s CUDA system has been the most widely used for artificial intelligence development.

However, once developers customize their code for Nvidia processors, running it on competitors’ GPUs can be inefficient. According to Meta, its new open-source AI platform allows users to shift between processors without becoming locked in.
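As a concrete illustration of what avoiding lock-in can look like at the code level, PyTorch itself already abstracts part of this: its ROCm build for AMD GPUs reuses the same `torch.cuda` device interface, so inference code written like the minimal sketch below runs unchanged on an Nvidia or AMD accelerator, with the installed back-end deciding which vendor’s GPU executes the kernels. The tiny linear model and input shape are placeholder assumptions.

```python
import torch

# The same device string works on both the CUDA build (Nvidia) and the ROCm build (AMD) of PyTorch;
# the installed back-end determines which vendor's GPU actually runs the kernels.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(1024, 1024).eval().to(device)
x = torch.randn(32, 1024, device=device)

with torch.no_grad():
    y = model(x)

print(f"ran on: {y.device}")  # e.g. cuda:0 on either vendor's GPU, cpu otherwise
```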

Meta wrote in a blog post that the unified GPU back-end support gives deep learning developers more hardware vendor options at lower migration cost. Requests for comment from Nvidia and AMD were not immediately answered.

Preventing vendor lock-in

Meta’s software is designed for an AI process known as inference, which occurs when machine learning models that have already been trained on massive quantities of data are asked to make rapid decisions, such as determining whether an image shows a cat or a dog.
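To make the training-versus-inference distinction concrete, here is a minimal inference sketch: a classifier that was already trained elsewhere (on ImageNet) is loaded and asked to label a single image with one fast forward pass. The image path is a placeholder assumption; the model, weights, and preprocessing come from torchvision.

```python
import torch
from PIL import Image
from torchvision import models

# Load a network whose training already happened elsewhere (ImageNet weights).
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

# Placeholder path: any local photo will do.
image = Image.open("pet_photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)

# Inference: a single forward pass, no gradient computation.
with torch.no_grad():
    probabilities = model(batch).softmax(dim=1)[0]

top = probabilities.argmax().item()
print(weights.meta["categories"][top], f"{probabilities[top]:.1%}")
```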

According to MLCommons, an independent organization that assesses AI speed, Meta’s multi-platform software project benefits consumer choice. MLCommons further stated that the new platform demonstrates the importance of software, particularly when it comes to deploying neural networks in machine learning for inference.
