Keynotes

Performance Scaling with Innovative Compute Architectures and FPGAs

Michaela Blott (Xilinx Research)

Performance scaling and power efficiency with traditional computing architectures become increasingly challenging as next-generation technology nodes provide diminishing performance and energy benefits. FPGAs, with their reconfigurable circuits, can tailor the hardware to the application through customized arithmetic and innovative compute and memory architectures, thereby exposing further potential for performance scaling. This has stimulated significant interest in exploiting them for compute-intensive applications. During this talk, we discuss some examples of these innovative customized compute architectures in the context of data processing and show how they unleash new levels of performance scalability and compute efficiency.

Michaela Blott is a Distinguished Engineer at Xilinx Research, where she heads a team of international scientists driving research into new application domains for Xilinx devices, such as machine learning, in both embedded and hyperscale deployments.

She graduated from the University of Kaiserslautern in Germany and brings over 25 years of experience in computer architecture, FPGA, and board design, working in both research institutions (ETH Zurich and Bell Labs) and development organizations.

She is strongly involved with the international research community as technical co-chair of FPL 2018, workshop organizer (H2RC), and industry advisor on several EU projects, and she serves on numerous technical program committees (FPL, ISFPGA, DATE, etc.).

Fresh Thinking: New Researchers, Bright Ideas

A series of invited talks by younger researchers, including:

Dark silicon — a currency we do not control

Holger Pirk (Imperial College)

The breakdown of Dennard scaling changed the game of processor design: no longer can the entire die be filled with "always-on" components -- some regions must be powered up and down at runtime to prevent the chip from overheating. Such "dim" or "dark" silicon is the new currency of chip design, raising the question: what functionality should be implemented in dark silicon? Viable candidates are any non-essential units that support important applications. Naturally, database researchers were quick to claim this resource, arguing that it should be used to implement instructions and primitives supporting database workloads.

In this talk, we argue that, due to economic constraints, such a design is unlikely to be implemented in mainstream server chips. Instead, chip designers will spend silicon on high-volume market segments such as AI, security, or graphics/AR, which require a different set of primitives. Consequently, database researchers need to find uses for the actual functionality of chips rather than wishing for features that are economically infeasible. Let us develop innovative ways to exploit the "hardware we have, not the hardware we wish to have at a later time". In the talk, we discuss examples of creative use of hardware for data management purposes, such as TLBs for MVCC, transactional memory for statistics collection, and hardware graphics shaders for data analytics. We also highlight some processor functionality that still calls for creative use, such as many floating-point instructions, integrated sound processors, and some of the model-specific registers.
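To make the "creative reuse" idea more concrete, the following is a minimal sketch (not taken from the talk) of how hardware transactional memory, via Intel's RTM intrinsics, might group several statistics-counter updates into a single atomic transaction; the counter names, the record_scan function, and the mutex fallback are illustrative assumptions.

```cpp
// Sketch: updating a group of statistics counters inside one hardware
// transaction (Intel RTM, compile with -mrtm). Illustrative only; a real
// system would also check CPUID for RTM support and tune the abort path.
#include <immintrin.h>
#include <cstdint>
#include <mutex>

struct TableStats {              // hypothetical per-table statistics
    uint64_t rows_scanned = 0;
    uint64_t rows_matched = 0;
    uint64_t bytes_read   = 0;
};

static std::mutex fallback_lock; // taken only when the transaction aborts

void record_scan(TableStats& s, uint64_t matched, uint64_t bytes) {
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        // All three counters change atomically as one transaction, so a
        // concurrent reader never observes them out of sync.
        s.rows_scanned += 1;
        s.rows_matched += matched;
        s.bytes_read   += bytes;
        _xend();
    } else {
        // Abort path: fall back to a conventional lock.
        std::lock_guard<std::mutex> g(fallback_lock);
        s.rows_scanned += 1;
        s.rows_matched += matched;
        s.bytes_read   += bytes;
    }
}
```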

Holger Pirk is an assistant professor ("Lecturer" in traditional English terms) in the Department of Computing at Imperial College London. As such, he is a member of the Large-Scale Data and Systems Group.

He is interested in all things data: analytics, transactions, systems, algorithms, data structures, processing models, and everything in between. While most of his work targets "traditional" relational databases, his declared goal is to broaden the applicability of data management techniques. This means targeting not only new platforms like GPUs and FPGAs but also new applications like compilers, games, and AI.

Before joining Imperial, Holger was a postdoc in the Database group at MIT CSAIL. He spent his PhD years in the Database Architectures group at CWI in Amsterdam, receiving his PhD from the University of Amsterdam in 2015.


Building real database systems on real persistent memory

Tianzheng Wang (Simon Fraser University)

"Real" persistent memory, such as Intel Optane DC PMM, offers high density, persistence and speed in between flash and DRAM. This changes the way we deal with storage devices in database systems - it is byte-addressable like memory, yet it is also persistent. Systems researchers have been keen in exploring its use since more than 10 years ago, to build persistent indexes, new file systems, persistent queues, faster logging and better replication approaches. Yet almost all previous work had to be done in simulated environments.

Now it is time to look back, rethink, and devise practical, innovative ways of exploiting real persistent memory in database systems. In this talk, we discuss our recent experience with real Optane DC PMMs and the implications and future roles of persistent memory in database systems. In particular, we highlight the challenges and issues that were not well understood in simulated environments, such as the programming model and resource contention between DRAM and persistent memory.
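As a hedged illustration of the programming-model challenge (not code from the talk), the sketch below shows the common x86 store / cache-line write-back / fence pattern for making data durable on persistent memory; the persist helper, the LogRecord layout, and the assumption of a DAX-mapped file on Optane DC PMM are all illustrative.

```cpp
// Sketch: the basic x86 persistence pattern for real persistent memory.
// A store only becomes durable once its cache line is written back (CLWB)
// and ordered by a fence. Compile with -mclwb; assumes rec points into a
// DAX-mapped persistent-memory file. Illustrative only.
#include <immintrin.h>
#include <cstdint>
#include <cstring>

constexpr size_t CACHE_LINE = 64;

// Write back every cache line covering [addr, addr + len) and fence, so
// these stores are durable before anything issued afterwards.
void persist(const void* addr, size_t len) {
    auto p   = reinterpret_cast<uintptr_t>(addr) & ~(CACHE_LINE - 1);
    auto end = reinterpret_cast<uintptr_t>(addr) + len;
    for (; p < end; p += CACHE_LINE)
        _mm_clwb(reinterpret_cast<void*>(p));
    _mm_sfence();
}

// Hypothetical log record: the payload must be durable before the flag.
struct LogRecord {
    uint64_t payload[7];
    uint64_t committed;   // 0 = record ignored after a crash
};

void append(LogRecord* rec, const uint64_t* data) {
    std::memcpy(rec->payload, data, sizeof(rec->payload));
    persist(rec->payload, sizeof(rec->payload));       // payload first
    rec->committed = 1;
    persist(&rec->committed, sizeof(rec->committed));  // then the flag
}
```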

Tianzheng Wang is an assistant professor in the School of Computing Science at Simon Fraser University in Canada (since Fall 2018). He works on the boundary between software and hardware to build better systems by fully utilizing the underlying hardware. His current research focuses on database systems and related systems areas that impact the design of database systems, such as operating systems, distributed systems, and synchronization. He is also interested in storage, mobile and embedded systems. Tianzheng Wang received his Ph.D. in computer science from the University of Toronto in 2017, advised by Ryan Johnson and Angela Demke Brown. Prior to joining Simon Fraser University, he spent one year (2017-2018) at Huawei Canada Research Centre (Toronto) as a research engineer.