Intel joins PyTorch Foundation as a ‘Premier’ member

Intel has become a ‘Premier’ member of the PyTorch Foundation in a move aimed at advancing the development of AI.

PyTorch is a popular open-source framework that accelerates AI application development and facilitates experimentation that can lead to creative breakthroughs in the field. The framework was originally developed by Meta AI and is now part of the Linux Foundation.

Intel’s involvement with PyTorch dates back to 2018, with a clear vision to democratise AI access through widespread hardware availability and open software solutions. In pursuit of an “AI Everywhere” future, Intel has centred its efforts on enhancing PyTorch’s capabilities and ecosystem, enabling a landscape where AI innovation thrives.

One of Intel’s major contributions to PyTorch is its comprehensive set of optimisations for x86 architecture. These include leveraging the Intel oneAPI Deep Neural Network Library (oneDNN), optimisations for ATen operators, support for BFloat16, and auto-mixed precision. These advancements have been crucial in making PyTorch more efficient and capable on Intel-powered hardware.
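In practice, the BFloat16 and auto-mixed precision support surfaces through PyTorch’s standard autocast API. The snippet below is a minimal sketch of CPU inference under auto-mixed precision; the model and tensor shapes are purely illustrative.

import torch
import torch.nn as nn

# Illustrative model; any nn.Module follows the same pattern.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
x = torch.randn(32, 128)

# Auto-mixed precision on CPU: eligible operators run in BFloat16 while
# numerically sensitive ones stay in FP32; on x86 the kernels are backed by oneDNN.
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    output = model(x)

print(output.dtype)  # typically torch.bfloat16 for linear layers under autocast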

In the journey towards PyTorch 2.0, Intel has also delivered several substantial contributions. These contributions include optimised CPU FP32 inference, improved performance for Graph Neural Networks (GNNs), optimised int8 inference with unified quantisation, and the utilisation of the oneDNN Graph API for accelerated inference on CPUs.
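The unified quantisation work is exposed through PyTorch’s existing quantisation tooling. The following sketch shows eager-mode post-training static int8 quantisation routed through the unified x86 backend, assuming PyTorch 2.0 or later; the small model and calibration data are illustrative only.

import torch
import torch.nn as nn
from torch.ao.quantization import (
    QuantStub, DeQuantStub, get_default_qconfig, prepare, convert
)

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()      # marks the float -> int8 boundary
        self.fc = nn.Linear(128, 64)
        self.relu = nn.ReLU()
        self.dequant = DeQuantStub()  # marks the int8 -> float boundary

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.fc(x))
        return self.dequant(x)

model = SmallNet().eval()

# Select the unified x86 backend, which dispatches to fbgemm or oneDNN
# kernels depending on the CPU's capabilities.
torch.backends.quantized.engine = "x86"
model.qconfig = get_default_qconfig("x86")

prepared = prepare(model)          # insert observers
prepared(torch.randn(8, 128))      # calibrate with representative data
quantized = convert(prepared)      # swap in int8 modules

with torch.no_grad():
    out = quantized(torch.randn(8, 128))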

Intel’s engagement with the PyTorch community is reinforced by the presence of four dedicated PyTorch maintainers. These individuals play a pivotal role in maintaining CPU performance modules and the compiler front-end. Their proactive involvement in addressing issues, reviewing pull requests, and contributing to PyTorch’s development has been instrumental in driving progress.

The company’s efforts also extend beyond its contributions to PyTorch core.

Intel actively collaborates with the PyTorch community through activities such as triaging GitHub issues, enhancing documentation, organising meetups and workshops, publishing technical content, and releasing Intel Extension for PyTorch, which gives users early access to Intel’s software and hardware optimisations for PyTorch.
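Using the extension typically amounts to a single optimisation call on an existing model. The sketch below assumes the intel_extension_for_pytorch package is installed and follows Intel’s documented ipex.optimize pattern; the model itself is illustrative.

import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex  # Intel's extension package

# Illustrative model; in practice this would be a real inference workload.
model = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
x = torch.randn(16, 256)

# ipex.optimize applies Intel's operator and graph optimisations,
# optionally converting weights to BFloat16 on supported CPUs.
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(x)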

In alignment with its commitment to open-source AI initiatives, Intel recently joined the Linux Foundation AI & Data Foundation as a Premier member. By participating in the Governing Board, Intel aims to leverage its expertise to shape the strategic direction of the foundation’s AI and data endeavours.

With Intel’s extensive contributions and dedication, the PyTorch ecosystem is poised to flourish—paving the way for innovative AI applications that benefit the entire industry and society at large.

(Image Credit: PyTorch Foundation)

See also: Bing Chat will be available to third-party browser developers

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
