Keynotes

AMD Vitis™ High-Level Synthesis (HLS) Tool: Principles and Evolution

by Alain Darte (AMD)

The AMD Vitis™ High-Level Synthesis (HLS) tool enables FPGA development at a higher level of abstraction, leveraging optimized libraries, C++ programming, and C-based test benches for early validation, debugging, and functional verification. This presentation will share some of the core principles of Vitis HLS, its programming model for synthesizing C++ code into high-throughput, resource-efficient hardware designs, and how these principles enabled the evolution of Vitis HLS into a full compiler offering both fine-grained user control over performance and resources and automated performance optimization.
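
To make the programming model concrete, here is a minimal sketch, in the C++ style that Vitis HLS accepts, of a loop whose throughput is steered by a pipeline pragma. The kernel name, array size, and initiation-interval target are illustrative examples, not taken from the talk:

    // Illustrative only: a small vector-scaling kernel written for HLS.
    // The kernel name, size, and II target below are hypothetical.
    void scale(const int in[1024], int out[1024], int factor) {
        for (int i = 0; i < 1024; ++i) {
            #pragma HLS PIPELINE II=1  // request one new loop iteration per clock cycle
            out[i] = in[i] * factor;
        }
    }

The same C++ source can be exercised by a C-based test bench before synthesis, which is what enables the early validation and debugging mentioned above.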


Alain Darte

Alain Darte is principal architect for the AMD high-level synthesis tool Vitis HLS. He joined the Xilinx HLS team in 2017 after more than 25 years as an academic researcher at CNRS (the French National Centre for Scientific Research) in the field of program analysis and optimization for high-performance computing and the synthesis of hardware accelerators. With 90+ papers, his main contributions are in parallelism detection algorithms, loop optimizations, memory reuse, and static single assignment (SSA). In Vitis HLS, he is the main re-architect of the so-called dataflow optimizations, and is in charge of throughput optimizations and of the front-end of the tool. He holds a PhD in computer science from École Normale Supérieure de Lyon.

The Data Center of the Future: Disaggregated, Serverless, and Heterogeneous

by Miriam Leeser (Northeastern University)

Data centers, cloud computing setups, and High Performance Computing (HPC) increasingly rely on a diversity of accelerators to process 21st century applications. These applications process large amounts of data at high speed and require innovative configurations to deliver this data to increasingly parallel computation. The solution to address this problem is disaggregated computing, where accelerators, memories, and other devices are directly connected to the network. Disaggregation allows users to flexibly acquire and allocate the computing resources they need. Connecting hardware directly to the network removes the overhead and delay associated with host intervention. Disaggregated computation can be supported in software with a serverless computing model, which provides an abstraction for decomposing applications to map onto disaggregated hardware. In this talk, I will focus on network-attached FPGAs available in the Open Cloud Testbed (OCT), https://octestbed.org/. Current hardware available in OCT includes AMD Alveo U280s, Versal VCK5000s, and Alveo V70s, with Alveo V80s being added soon. I will discuss the hardware model and programming paradigm of accelerators in disaggregated computing, as well as applications that benefit from hardware directly connected to the network. I will also discuss challenges and issues that arise from this model. Specifically, I will talk about applications that make use of disaggregation, services such as RDMA, and challenges, especially with regard to security. The Open Cloud Testbed allocates bare-metal nodes that are isolated from the public network. This setup allows users to conduct a range of experiments, including launching security attacks and testing solutions.


Miriam Leeser

Miriam Leeser has been designing with FPGAs for decades. She is a Professor of Computer Engineering at Northeastern University and head of the Reconfigurable Computing Laboratory. She has conducted research in floating-point implementations, unsupervised learning, medical imaging, and privacy-preserving data processing. Throughout her career, she has been funded by both government agencies and companies, including AMD, DARPA, NSF, Google, and MathWorks. She is a senior member of ACM and a senior member of the Society of Women Engineers (SWE). She is the recipient of a Fulbright Scholar Award and a Charter Member of the IEEE Computer Society Distinguished Contributor Recognition Program. Her current research focus is on FPGAs for wireless communications as well as FPGAs in the data center.

Fantastic Arithmetic Beasts and Where to Find Them

by Florent de Dinechin (INSA-Lyon) and Bogdan Pasca (Intel)

If an application does not compute accurately enough, somebody will complain. Conversely, as soon as an application computes accurately enough, chances are that it computes too accurately. Nobody will complain, but maybe they should? Computing too accurately is a waste of resources and power. We tend to accept it, because microprocessors have trained us to design our applications around the handful of operators they offer. In FPGAs, conversely, we have the opportunity to design the arithmetic for the application. Can we dream of applications that compute just right (accurately enough, but not too accurately)? This has been the goal of the FloPoCo project for 15 years. The main challenge (but also a good thing in terms of publication potential) is that there is an infinite number of such application-specific operators — we will demonstrate a few of them. How can we ensure that each of these operators will be optimized for applications that we do not know yet? This talk will review some of the methodologies and tools that have been developed in FloPoCo for this purpose, and conclude with the challenges and opportunities of High-Level Synthesis in this context.
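
As a hedged illustration of the "computing just right" idea (not of FloPoCo itself, which generates dedicated VHDL operators from a specification), the C++ sketch below tailors fixed-point word lengths to the accuracy an application needs instead of defaulting to 32-bit floats. The ap_fixed type from the HLS tool discussed in the first keynote is used here only as a stand-in, and all widths are hypothetical:

    // Illustrative only: an 8-element dot product with bit widths chosen for the
    // application rather than inherited from a standard floating-point format.
    #include <ap_fixed.h>

    // 12 bits total, 2 integer bits: inputs in [-2, 2) with about 3 decimal digits.
    typedef ap_fixed<12, 2> sample_t;
    // 7 integer bits so the sum of 8 products cannot overflow; fractional bits are
    // trimmed to what the downstream computation actually needs.
    typedef ap_fixed<24, 7> acc_t;

    acc_t dot8(const sample_t x[8], const sample_t w[8]) {
        acc_t acc = 0;
        for (int i = 0; i < 8; ++i)
            acc += x[i] * w[i];
        return acc;
    }

In the spirit of the talk, an operator generator can go further and size a single fused datapath once for the whole sum, rather than composing pre-built word-level operators.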


Florent de Dinechin

Florent de Dinechin is a professor at INSA-Lyon, where he teaches computer architecture and compilers. His main research interests are computer arithmetic, FPGAs, and elementary functions. He is the main architect of the FloPoCo project. He is a co-author of the “Handbook of Floating-Point Arithmetic”, and in 2024 he published with Martin Kumm the book “Application-Specific Arithmetic: Computing Just Right for the Reconfigurable Computer and the Dark Silicon Era”.

Bogdan Pasca

Bogdan Pasca is a Principal Engineer at Intel, where he focuses on architecting arithmetic operators for FPGAs. He joined the Altera European Technology Center in 2011, shortly after defending his PhD at École Normale Supérieure de Lyon (where he was an active contributor to the FloPoCo project). His main research activity focuses on the design of highly optimized floating-point operators for FPGAs, which are made available and used throughout most Intel FPGA toolflows.
