AMD has released the driver source code for its XDNA-based NPUs

AMD XDNA

AMD has finally released its XDNA driver for Linux

AMD has made an interesting announcement by publishing the source code of the driver for its AI engine units based on the XDNA architecture, known as NPUs (neural processing units), which are designed to accelerate workloads related to machine learning and signal processing.

These NPUs were introduced last year with the Ryzen 7040 “Phoenix” series of APUs, the first to ship with the XDNA AI engine built on Xilinx IP. The engine can accelerate machine learning frameworks such as PyTorch and TensorFlow.

Initially, Ryzen AI support was limited to Windows. However, after a demonstration in June 2023, AMD officials asked the Linux community on GitHub for feedback on whether Linux support should be added. Although the discussion thread was initially closed after three days, AMD reopened the feedback collection in October and received a positive response from the Linux community, with over a thousand comments requesting support for Ryzen AI.

Now, in response to requests from developers in the Linux community, AMD has officially released the open source XDNA driver for Linux.

Although AMD has not yet confirmed whether the driver will be merged into the mainline kernel, this step represents a significant advance for Linux users who want to take advantage of the capabilities of Ryzen AI on their systems. The release of the open source driver demonstrates AMD's commitment to the developer community and its willingness to respond to the needs and requests of Linux users.

The published code set includes:

  • amdxdna.ko - A low-level Linux kernel driver that interacts directly with the XDNA hardware, enabling communication and control between the operating system and the NPU.
  • A runtime library, delivered as the “xrt_plugin*-amdxdna” plugin, built on the XRT (Xilinx Runtime) interface. It allows applications to interact with the NPU through XRT and to access and run kernels on the hardware accelerator efficiently.

The advantage of XRT lies in its ability to provide multiple levels of abstraction, making it easier to develop applications in a variety of programming languages: from low-level C/C++ APIs to high-level Python bindings, as well as built-in components for working with popular machine learning frameworks such as TensorFlow, PyTorch, and Caffe.
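Since the article mentions XRT's C++ API, here is a minimal sketch of what dispatching work through that interface typically looks like. This is a generic XRT example rather than code taken from AMD's XDNA repository: the binary name example.xclbin, the kernel name vector_add, and its argument layout are hypothetical placeholders, and the exact packaging and kernel entry points for XDNA NPUs may differ from the classic FPGA flow.

```cpp
// Minimal sketch of dispatching work to an accelerator via the XRT C++ API.
// "example.xclbin" and "vector_add" are placeholder names for illustration.
#include <xrt/xrt_device.h>
#include <xrt/xrt_kernel.h>
#include <xrt/xrt_bo.h>

int main() {
    // Open the first enumerated accelerator device.
    xrt::device device{0};

    // Load the compiled kernel binary onto the device.
    auto uuid = device.load_xclbin("example.xclbin");

    // Look up a kernel by name inside the loaded binary.
    xrt::kernel kernel{device, uuid, "vector_add"};

    constexpr size_t count = 1024;
    constexpr size_t bytes = count * sizeof(int);

    // Allocate device-visible buffers bound to the kernel's argument groups.
    xrt::bo in_a{device, bytes, kernel.group_id(0)};
    xrt::bo in_b{device, bytes, kernel.group_id(1)};
    xrt::bo out {device, bytes, kernel.group_id(2)};

    // ... fill in_a/in_b on the host, e.g. via in_a.map<int*>() ...
    in_a.sync(XCL_BO_SYNC_BO_TO_DEVICE);
    in_b.sync(XCL_BO_SYNC_BO_TO_DEVICE);

    // Launch the kernel and wait for completion.
    auto run = kernel(in_a, in_b, out, count);
    run.wait();

    // Copy results back to host memory.
    out.sync(XCL_BO_SYNC_BO_FROM_DEVICE);
    return 0;
}
```

The same flow is exposed through the Python bindings mentioned above, which mirror the device/kernel/buffer objects of the C++ API.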

Regarding the driver, the following is required to run AI applications:

  • Processor:
    • To run AI applications (test machine): a Ryzen AI processor, for example Phoenix or Strix
    • Any x86 processor to build the repository (an AMD processor is recommended if possible)
  • Operating system: Ubuntu 22.04
  • Linux kernel: version 6.7 or later, with IOMMU SVA (Shared Virtual Addressing) support enabled.
  • XRT base package installed
    • To ensure that the base XRT package works with the plugin package, it is best to compile it from the xrt submodule in this repository (/xrt)

For those interested in the driver code, it is written in C and C++ and has been released under the GPLv2 license, which means it is open source and the developer community can access, modify, and distribute the software under the terms of the license.

Finally, if you are interested in learning more about it, or in following the detailed instructions for building the driver and the kernel, you can consult the following link.

