The open source machine learning (ML) framework PyTorch has made significant progress with its latest release, PyTorch 2.1. In addition, PyTorch has introduced two groundbreaking new projects – PyTorch Edge and ExecuTorch. These developments were announced at the PyTorch Conference, coinciding with the one-year anniversary of the PyTorch Foundation’s formation. The conference shed light on the remarkable advancements and milestones that PyTorch has achieved over the past year.
Historically, PyTorch has primarily been used for training AI models, including renowned large language models (LLMs) such as GPT from OpenAI. However, PyTorch is now expanding its horizons by venturing into the realm of inference. PyTorch 2.1 includes several features and enhancements aimed at improving inference operations for server deployments, with IBM contributing significantly to these efforts. The release introduces performance enhancements and support for automatic dynamic shapes, which minimizes recompilations caused by tensor shape changes, and it accelerates common numerical calculations for data science by translating NumPy operations into PyTorch.
One of PyTorch’s groundbreaking projects is PyTorch Edge, which focuses on enabling AI inference at the edge and on mobile devices. PyTorch Edge aims to address the challenges posed by resource-constrained edge devices by providing an end-to-end workflow for deploying AI models. By producing optimized native programs, PyTorch Edge targets a wide array of edge devices, including mobile phones, AR/VR headsets, wearables, embedded systems, and microcontrollers, opening up new possibilities for deploying AI models in a portable and optimized manner.
ExecuTorch is a technology developed by Meta Platforms, formerly known as Facebook, and it is now open source as part of the PyTorch Foundation. Meta has already used ExecuTorch to power its latest generation of Ray-Ban smart glasses and the Quest 3 VR headset. ExecuTorch offers an end-to-end solution for deploying AI models on mobile and edge devices. The workflow begins with a standard PyTorch module, which is converted into an exported graph, optimized through transformation and compilation passes, and tailored for specific devices. This portability and optimization enable seamless execution of AI models across various devices. ExecuTorch also provides consistent APIs and software development kits, improving developer productivity and reducing fragmentation in AI model deployment.
Paving the Way for Collaborative Innovation
By open sourcing ExecuTorch, Meta Platforms aims to foster collaboration within the industry to address the challenges of deploying AI models on edge devices. Meta believes that ExecuTorch’s optimized and portable workflow will empower organizations to leverage on-device AI more effectively. The decision to open source ExecuTorch reflects Meta’s commitment to community engagement and to gathering valuable feedback that can further enhance the technology. With ExecuTorch becoming part of the PyTorch Foundation, it opens up opportunities for joint efforts to drive advancements and address fragmentation in the deployment of AI models.
PyTorch’s latest release and new projects demonstrate the framework’s commitment to pushing the boundaries of AI inference capabilities. With PyTorch 2.1, training and inference operations have been significantly optimized. Additionally, PyTorch Edge and ExecuTorch offer distinct solutions for deploying AI models on edge devices and enable efficient on-device inference. By embracing open source and community collaboration, PyTorch is poised to accelerate innovation and address the challenges posed by the wide range of edge devices. The future of AI inference looks promising with PyTorch leading the way toward optimized, portable, and efficient deployment of AI models.