
AI Inference Acceleration

  • Lowest Latency AI Inference
  • Accelerate your whole application
  • Match the Speed of Innovation

Xilinx Data Center AI Platform

For real-time AI inference, Xilinx delivers high throughput at low latency. In standard benchmark tests on GoogLeNet V1, the Xilinx Alveo U250 platform delivers more than 4x the throughput of the fastest existing GPU for real-time inference. Learn more in the white paper: Accelerating DNNs with Xilinx Alveo Accelerator Cards

Xilinx Edge AI Platform

AI Inference performance leadership with CNN pruning technology.

  • 5X to 50X network performance optimization
  • Increases FPS and reduces power
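As a rough illustration of the idea behind pruning, the sketch below zeroes out the smallest-magnitude weights in a layer (magnitude pruning). This is a generic, hypothetical example; the actual Xilinx pruning technology operates at the network level and achieves its optimization differently.

```python
def prune_by_magnitude(weights, sparsity):
    """Return a copy of `weights` with the smallest-magnitude
    fraction (`sparsity`) of entries set to zero.

    A minimal sketch of magnitude pruning, not the Xilinx tool flow.
    """
    n_prune = int(len(weights) * sparsity)
    # Indices of the n_prune weights closest to zero.
    drop = set(sorted(range(len(weights)),
                      key=lambda i: abs(weights[i]))[:n_prune])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

# Pruning half the weights keeps only the largest-magnitude ones.
w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02]
print(prune_by_magnitude(w, 0.5))  # [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

Skipping the zeroed weights at inference time is what reduces compute per frame, which is where the FPS and power benefits come from.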

Optimization/Acceleration Compiler Tools

  • Supports networks from TensorFlow, Caffe, and MXNet
  • Compiles networks to optimized Xilinx Edge runtime

Lowest Latency AI Inference

High Throughput OR Low Latency

Achieves throughput using a high batch size. The accelerator must wait for every input in the batch to arrive before processing, resulting in high latency.

High Throughput AND Low Latency

Achieves throughput using a low batch size. Each input is processed as soon as it arrives, resulting in low latency.
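The trade-off above can be shown with a toy back-of-the-envelope model. All numbers here are hypothetical and chosen only to make the arithmetic visible; they are not Xilinx or GPU benchmark figures.

```python
def first_input_latency_ms(batch_size, arrival_interval_ms, compute_ms):
    """Worst-case latency seen by the first input in a batch:
    it waits for the rest of the batch to arrive, then for the
    whole batch to be computed. Purely illustrative model."""
    wait_for_batch_ms = (batch_size - 1) * arrival_interval_ms
    return wait_for_batch_ms + compute_ms

# Hypothetical scenario: inputs arrive every 5 ms.
# High batch: 32 inputs, 8 ms to compute the batch.
print(first_input_latency_ms(32, 5, 8))  # 163 ms
# Batch of 1: 2 ms to compute a single input.
print(first_input_latency_ms(1, 5, 2))   # 2 ms
```

Even when the batched device computes faster per input, the time spent assembling the batch dominates the latency each individual input experiences.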

Whole App Acceleration

Optimized hardware acceleration of both AI inference and other performance-critical functions, achieved by tightly coupling custom accelerators in an adaptable silicon device.

This delivers end-to-end application performance that is significantly greater than that of a fixed-architecture AI accelerator like a GPU. With a GPU, the other performance-critical functions of the application must still run in software, without the performance or efficiency of custom hardware acceleration.

Match the Speed of AI Innovation

AI Models Are Rapidly Evolving

Adaptable silicon allows Domain-Specific Architectures (DSAs) to be updated and optimized for the latest AI models without requiring new silicon.

Fixed silicon devices cannot be optimized for the latest models because of their long development cycles.


Edge Developers

The Xilinx Edge AI Platform is available on Xilinx Zynq SoC and MPSoC Edge cards.

Learn More

Data Center Developers

The Xilinx Data Center AI Platform is available on a variety of platforms, including Xilinx Alveo accelerator cards and the Amazon AWS F1 FPGA instance.

Learn More

Xilinx University Program

Enabling the use of Xilinx technologies for academic teaching and research

Learn More