
Keynotes

Shahar Kvatinsky, Technion

Real Processing-in-Memory  with Memristive Memory Processing Unit


Abstract: For many years, computers have been built with separate units for processing and storing data – the processor and the memory. However, emerging applications such as artificial intelligence and the internet-of-things require ample amounts of data to be processed from numerous origins. This forces enormous data movement, which has become the main limitation in modern computing systems. Not only is the speed of computers limited by this data movement, but most of their energy consumption is due to this transfer rather than to the computation itself.

An attractive approach to alleviating the data movement problem is to process data inside the memory. Unfortunately, contemporary memory technologies are ill-suited for such an approach. Memristive technologies are attractive candidates to replace conventional memory technologies, and they can also be used to perform logic and arithmetic operations. Combining data storage and computation in the memory array enables a novel computer architecture, where both operations are performed within a memristive Memory Processing Unit (mMPU). The mMPU relies on adding computing capabilities to the memristive memory cells without changing the basic memory array structure, and thereby overcomes the primary restriction on performance and energy in computers today.

This talk focuses on the various aspects of the mMPU. I will discuss its architecture and its implications for the computing system and software, as well as examine the microarchitectural aspects. I will show how to design the mMPU controller and how computing operations in an mMPU can be automatically mapped to, and optimized as, sequences of basic Memristor Aided Logic (MAGIC) NOR and NOT operations. Then, I will present examples of applications that can benefit from processing within memristive memory and show how adding an mMPU to conventional computing systems substantially improves system performance and energy consumption.
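To make the idea of lowering computation to MAGIC primitives concrete, the toy Python model below is an illustration only (not the actual mMPU synthesis flow or controller): it shows how AND and OR can be expressed purely as sequences of NOR and NOT steps, the operations that MAGIC executes natively inside the memory array.

# Illustrative software model of MAGIC-style logic decomposition.
# NOR is functionally complete, so any Boolean function can be lowered
# to a sequence of NOR/NOT steps executed inside the memory array.

def magic_nor(a: bool, b: bool) -> bool:
    # One MAGIC NOR step; in hardware the result would be written to a fresh cell.
    return not (a or b)

def magic_not(a: bool) -> bool:
    # NOT is a single-input NOR.
    return magic_nor(a, a)

def magic_or(a: bool, b: bool) -> bool:
    # OR = NOT(NOR(a, b)): two in-array steps.
    return magic_not(magic_nor(a, b))

def magic_and(a: bool, b: bool) -> bool:
    # AND = NOR(NOT(a), NOT(b)): three in-array steps.
    return magic_nor(magic_not(a), magic_not(b))

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            assert magic_and(a, b) == (a and b)
            assert magic_or(a, b) == (a or b)
    print("AND/OR truth tables reproduced using only NOR/NOT steps.")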

Bio: Shahar Kvatinsky is an assistant professor at the Andrew and Erna Viterbi Faculty of Electrical Engineering, Technion – Israel Institute of Technology. He received the B.Sc. degree in computer engineering and applied physics and an MBA degree in 2009 and 2010, respectively, both from the Hebrew University of Jerusalem, and the Ph.D. degree in electrical engineering from the Technion – Israel Institute of Technology in 2014. From 2006 to 2009 he was with Intel as a circuit designer, and from 2014 to 2015 he was a post-doctoral research fellow at Stanford University. Kvatinsky is an editor of the Microelectronics Journal and has been the recipient of the 2015 IEEE Guillemin-Cauer Best Paper Award, the 2015 Best Paper Award of Computer Architecture Letters, the Viterbi Fellowship, the Jacobs Fellowship, an ERC Starting Grant, the 2017 Pazy Memorial Award, the 2014 and 2017 Hershel Rich Technion Innovation Awards, the 2013 Sanford Kaplan Prize for Creative Management in High Tech, the 2010 Benin Prize, and six Technion excellence in teaching awards. His current research focuses on circuits and architectures with emerging memory technologies and on the design of energy-efficient architectures.



Gustavo Alonso, ETH

The impact of modern hardware on system design

Abstract: Computing systems are undergoing a multitude of interesting changes: from the platforms (cloud, appliances) to the workloads, data types, and operations (big data, machine learning). Many of these changes are driven by, or being tackled through, innovation in hardware, even to the point of fully specialized designs for particular applications. In this talk I will review some of the most important changes happening in hardware and discuss how they affect system design, as well as the opportunities they create. I will focus on data processing as an example, but will also discuss applications in other areas.

Bio: Gustavo Alonso is a Professor of Computer Science at ETH Zürich. He studied telecommunications (electrical engineering) at the Madrid Technical University (ETSIT, Politecnica de Madrid). As a Fulbright scholar, he completed an M.S. and a Ph.D. in Computer Science at UC Santa Barbara. After graduating from Santa Barbara, he worked at the IBM Almaden Research Center before joining ETH Zürich, where he is part of the Systems Group (www.systems.ethz.ch). Gustavo is a Fellow of the ACM and of the IEEE.

Gustavo’s research interests encompass almost all aspects of systems, from design to run time. He works on distributed systems, data processing, and the system aspects of programming languages. Most of his research these days is related to multi-core architectures, data centers, FPGAs, and hardware acceleration, mainly adapting traditional system software (OS, database, middleware) to modern hardware platforms.



Derek Murray, Google

Optimizing TensorFlow for Multi-core and Heterogeneous Architectures


Abstract: TensorFlow is an open-source machine learning system, originally developed by the Google Brain team, which operates at large scale and in heterogeneous environments. TensorFlow trains and executes a wide variety of machine learning models at Google, including deep neural networks for image recognition and machine translation. It uses dataflow graphs to represent stateful computations, and achieves high performance by mapping these graphs across clusters of machines containing multi-core CPUs, general-purpose GPUs, and custom-designed ASICs known as Tensor Processing Units (TPUs). In this talk, I will present how we designed and implemented TensorFlow, with particular focus on how it makes efficient use of these diverse platforms. I will also discuss opportunities for future systems research in machine learning infrastructure.
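As a rough illustration of the dataflow-graph model described above, the following sketch uses the TensorFlow 1.x graph API (contemporary with this talk): the user builds a graph, pins operations to devices, and the runtime maps the graph onto the available hardware. The device string, soft-placement settings, and tensor shapes here are illustrative assumptions, not part of the talk.

# Minimal TensorFlow 1.x sketch: build a dataflow graph, request GPU placement,
# and let the runtime fall back to the CPU if no GPU is present.
import numpy as np
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, shape=[None, 784], name="x")
    # Ask for the dense layer to be placed on the first GPU; soft placement
    # (enabled below) moves it to the CPU if no GPU is available.
    with tf.device("/device:GPU:0"):
        w = tf.Variable(tf.random_normal([784, 10]), name="w")
        b = tf.Variable(tf.zeros([10]), name="b")
        logits = tf.matmul(x, w) + b
    init = tf.global_variables_initializer()

config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(graph=graph, config=config) as sess:
    sess.run(init)
    out = sess.run(logits,
                   feed_dict={x: np.random.rand(4, 784).astype(np.float32)})
    print(out.shape)  # (4, 10)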

Bio: Derek Murray is a Staff Software Engineer in the Google Brain team, working on TensorFlow. Previously, he was a researcher at Microsoft Research Silicon Valley where he primarily worked on the Naiad project, and he received his PhD in Computer Science from the University of Cambridge.