Meta just released PyTorch 2.0, the latest version of its open-source machine learning framework.

The release aims to boost performance while preserving the framework’s eager-mode development experience. PyTorch 2.0 adds support for dynamic shapes and distributed training, among other new features.

Additionally, PyTorch 2.0 introduces ‘torch.compile’, a new feature that improves PyTorch performance and starts moving parts of PyTorch from C++ back into Python.

What’s new?

The upgrade contains several new technologies, including:

  • TorchDynamo, which uses Python frame evaluation hooks to capture PyTorch programs safely.
  • AOTAutograd, which leverages PyTorch’s autograd engine as a tracing autodiff to generate ahead-of-time backward traces.
  • PrimTorch, which canonicalizes the more than 2,000 PyTorch operators down to a set of roughly 250 primitive operators that developers can target to build a complete PyTorch backend.
  • TorchInductor, a deep learning compiler that generates code for multiple accelerators and backends.

“PyTorch 2.0 embodies the future of deep learning frameworks,” said Luca Antiga, CTO of grid.ai and one of the primary maintainers of PyTorch Lightning.

“The possibility to capture a PyTorch program with effectively no user intervention and get massive on-device speedups and program manipulation out of the box unlocks a whole new dimension for AI developers.”

Getting started

The official blog post includes technical requirements, user experience, tutorials, and FAQs. Furthermore, there’s a comprehensive introduction and technical overview in the Get Started menu.

The company also launched a new ‘Ask the Engineers: 2.0 Live Q&A’ series that allows users to dive into topics with experts.
