Speaker: Chiranjib Bhattacharyya, Indian Institute of Science Bangalore, India
Talk Title: Learning without Labels
Abstract: Artificial Intelligence (AI) based systems have made rapid progress in the last decade, leading to revolutionary changes in several disciplines such as Medical Imaging and Autonomous Driving. However, most of today's AI systems are largely based on Supervised Learning, wherein the underlying machines are trained on inputs labelled by humans. The ability to learn from the environment without labels, often called Unsupervised Learning, is now considered the next big challenge in AI. In this talk, we will focus on the foundations of Unsupervised Learning, where many fundamental challenges remain. Some are statistical in nature, such as model complexity and sample complexity, while some are algorithmic, including the challenge of provably learning the parameters of a model from a finite amount of data in polynomial time. We will present several recent results, derived from ideas drawn from the disciplines of Computational Geometry and Statistical Mechanics, which further the theory behind several unsupervised learning models.
Bio: Dr. Chiranjib Bhattacharyya is currently a Professor and Chair of the Department of Computer Science and Automation (CSA) at the Indian Institute of Science (IISc), Bangalore. His research interests are in the foundations of Machine Learning, Optimisation and their applications to Industrial problems. He has authored numerous papers in leading journals and conferences in Machine Learning. Some of his results have won best paper awards. He joined the CSA Department at IISc in 2002 as an Assistant Professor. Prior to joining the Department, he was a postdoctoral fellow at UC Berkeley. He holds BE and ME degrees, both in Electrical Engineering, from Jadavpur University and IISc, respectively, and completed his PhD from the Department of Computer Science and Automation at IISc. He is a fellow of the Indian Academy of Engineering and Indian Academy of Sciences. For more information about his work, please visit http://mllab.csa.iisc.ac.in.
Speaker: Rupak Biswas, NASA Ames Research Center, California, USA
Talk Title: Advanced Computing for NASA Science and Engineering
Abstract: High-fidelity modeling, simulation, and analysis, enabled by supercomputing, are critical to NASA's missions in space exploration, aeronautics research, and scientific discovery. While such advancements used to rely primarily on theoretical studies and physical experiments, high-end computational science is today a larger contributor to these achievements. In addition, computational modeling and simulation serve as a predictive tool that is not otherwise available. As a result, the use of high-performance computing is integral to NASA's work in all mission areas. However, the success of many NASA missions depends on solving complex computing challenges, some of which are intractable on traditional supercomputers. This is where quantum computing could play an unprecedented role by harnessing effects such as tunneling, superposition, and entanglement. This talk is an overview of how a spectrum of advanced computing capabilities is leveraged for NASA's mission success.
Bio: Dr. Rupak Biswas is the Director of Exploration Technology at NASA Ames Research Center, Moffett Field, California, and has held this Senior Executive Service (SES) position since January 2016. He is in charge of planning, directing, and coordinating the technology development and operational activities of the organization, which comprises advanced supercomputing, human systems integration, intelligent adaptive systems, and entry systems technology. The directorate consists of over 950 employees with an annual budget of $231 million and includes two of NASA's critical and consolidated infrastructures: the arc jet testing facility and the supercomputing facility. He received his B.S. in Physics and B.Tech. in Computer Science from the University of Calcutta, and his M.S. and Ph.D. in Computer Science from Rensselaer Polytechnic Institute. He received the Presidential Rank Award from U.S. President Biden in 2021, as well as other awards from NASA, including the Exceptional Achievement Medal and the Outstanding Leadership Medal (twice). He is an internationally recognized expert in high-performance computing who has published more than 170 technical papers, received many Best Paper awards, edited several journal special issues, served on the IEEE/ACM Supercomputing Conference Steering Committee, and given numerous lectures around the world.
Speaker: Kanad Ghose, Department of Computer Science, SUNY-Binghamton
Talk Title: Data Centers, Wearable Sensors and Secure Processors
Abstract: This talk is an overview of three recent projects from my group on systems development encompassing aspects of hardware and/or systems software. The first focuses on improving the energy efficiency of data centers by provisioning server capacity and cooling to match the instantaneous workload without any impact on the quality of service. With wearable sensors, an ECG sensor in this case, the emphasis is on reducing the power requirement via ECG signal encoding while simultaneously enhancing the functions supported, specifically the detection of arrhythmia in real time. The final project develops RISC-V hardware, compiler, and Linux OS support for a secure processor that detects and prevents exploits rooted in software vulnerabilities. The solution extends tagging to instructions, going beyond tagging data alone, resulting in the ability to enforce content-specific security behavior in the microarchitecture.
Bio: Dr. Kanad Ghose (PhD, Computer Science, Iowa State Univ., 1988; B.Tech. and M.Tech., Radiophysics and Electronics, Calcutta Univ., 1977 and 1980) is a SUNY Distinguished Professor of Computer Science at SUNY-Binghamton, where he served as Department Chair from 1998 to 2016. His research interests and publications span all aspects of energy and power management, processor architectures, and systems software. He also serves as the Site Director and founding co-director of a multi-university NSF Industry-University Cooperative Research Center on Energy-Smart Electronics Systems, now in its 11th year of operation. His research has received support from Federal agencies (NSF, DARPA, AFOSR, etc.) and Industry (IBM, Intel, Xilinx/AMD, Microsoft, SRC, and many others). He was an Associate Editor of the IEEE Transactions on Computers and currently co-leads the architecture/systems technical working group/area in two Heterogeneous Integration Roadmaps – one from the IEEE and another from the Semiconductor Research Corporation (SRC).
Speaker: Michela Meo, Politecnico di Torino, Italy
Talk Title: Aerial Platforms for the Sustainability of Radio Access Networks
Abstract: High Altitude Platforms (HAPS) equipped with Base Stations have recently been considered a promising aerial network component to support Radio Access Networks (RANs). They are typically envisioned as a means to extend radio coverage to remote areas or to provide connectivity in case of a disaster. In this work, we take a different perspective and consider HAPS as elements that can help operate the radio access network so as to make it less energy demanding. When the demand for service is low, part of the demand can be offloaded to the HAPS so as to enable sleep modes in terrestrial nodes, reducing the energy consumption of the RAN. In addition, when equipped with a Multi-Access Edge Computing (MEC) server that provides caching capabilities, the HAPS brings additional capacity at no additional energy cost.
Bio: Dr. Michela Meo is a Professor of Telecommunication Engineering at the Politecnico di Torino, Italy. Her research interests include green networking, energy-efficient mobile networks and data centers, Internet traffic classification and characterization, and machine learning for video quality of experience. She edited the book Green Communications (Wiley) and several special issues of international journals. She chaired the International Advisory Council of the International Teletraffic Conference from 2015 to 2022. She is a Senior Editor of the IEEE Transactions on Green Communications and was an Associate Editor of the IEEE/ACM Transactions on Networking, the Green Series of the IEEE Journal on Selected Areas in Communications, and IEEE Communications Surveys and Tutorials. As General or Technical Chair, she has led the organization of several conferences.
Speaker: Debdeep Mukhopadhyay, IIT Kharagpur, India
Talk Title: Security of Mobile and IoT Systems: A Hardware Security Perspective
Abstract: The world is becoming smaller and smaller due to advancements in communications technology. Modern mobile phones and the Internet of Things (IoT) have led billions of devices to become connected with each other. While these advancements offer humanity tremendous potential, the same power of control and command, if misused, can lead to catastrophic consequences. Despite several advancements in the world of crypto and trust architectures, gaps often occur in the translation from "theory to practice". In this talk, we take a quick look at some of the glaring vulnerabilities that can cripple such systems. We start by introducing side-channel analysis, an attack vector that can recover secret information by monitoring unintended information channels, such as power/electromagnetic radiation, the timing of computations of crypto-algorithms, behavior under faults, etc. Subsequently, we discuss how these attacks can be extended to mobile systems and IoT platforms, often exploiting features in the micro-architecture originally designed to improve performance. Finally, we conclude with a recent hack on an open-source Trusted Execution Environment (TEE) implementing the Arm TrustZone technology. OP-TEE has been ported to many Arm devices and platforms, including low-cost devices like the Raspberry Pi, which are often used in the development of IoT systems. We show how a simple and reasonably low-cost fault attack can completely jeopardize the security of the system, emphasizing the importance of cryptographic engineering that factors in micro-architectural leakages.
Bio: Dr. Debdeep Mukhopadhyay is currently an Institute Chair Professor in the CSE Department at IIT Kharagpur, where he initiated the Secured Embedded Architecture Laboratory (SEAL), focusing on Hardware Security. He has worked as a visiting scientist at NTU Singapore, a visiting Associate Professor at NYU Shanghai, an Assistant Professor at IIT Madras, and a Visiting Researcher at the NYU Tandon School of Engineering, USA. He holds Ph.D., M.S., and B.Tech. degrees from IIT Kharagpur. His research interests are in Cryptographic Engineering and Hardware Security. Recently, he has been intrigued by adversarial attacks on machine learning and encrypted computation. Prof. Mukhopadhyay is the recipient of the prestigious Shanti Swarup Bhatnagar Award (2021) for Science and Technology, a Fellow of the Indian National Academy of Engineering, and a Fellow of the Asia-Pacific Artificial Intelligence Association (AAIA) for his contributions to micro-architectural security, cryptographic engineering, and information security. He is also a fellow of the C3iHub (Cyber Security and Cyber Security for Cyber-Physical Systems) Innovation Hub of IIT Kanpur, and has been listed among Asia's most outstanding researchers compiled by Asian Scientist Magazine (https://tinyurl.com/2vr8jaks). He was awarded the Qualcomm Faculty Award (2022), the Khosla National Award from IIT Roorkee (2021), the DST Swarnajayanti Fellowship (2015-16), the INSA Young Scientist Award, the INAE Young Engineer Award, and an Associateship of the Indian Academy of Sciences. He is a senior member of IEEE and ACM.
Speaker: Sumanta Pyne, National Institute of Technology, Rourkela, India
Talk Title: Computation-in-Memory: How Memristors are Reshaping Logic, Architecture, and Machine Learning
Abstract: In this talk, we will introduce the memristor as a new circuit element and discuss how it facilitates non-volatile storage and in-memory computation. These elements have immense potential to redefine logic design and instruction-set architecture, especially when an automated digital-to-analog compiler is used in tandem to translate the required actions into memristor control signals. Memristor crossbars can efficiently implement Boolean functions as well as a set of assembly-level instructions integrated with code generation, thus enabling processing of data within memory. Such capabilities help avoid cache misses and unnecessary bus transfers, leading to energy savings and performance enhancement. Additionally, memristors find wide application in designing hardware accelerators for neural networks.
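As a toy illustration of the kind of in-memory Boolean logic the abstract alludes to, the well-known stateful IMPLY operation (one of several memristor logic families; the construction below is a standard textbook example, not necessarily the speaker's design) lets the same device both store a bit and compute on it:

```python
# Toy bit-level model of stateful IMPLY logic on memristors.
# One IMPLY step leaves the target memristor q holding (NOT p) OR q,
# so the memristor that stores the result also performs the computation.

def imply(p: int, q: int) -> int:
    """Result written back into q's memristor: q <- (not p) or q."""
    return int((not p) or q)

def nand(p: int, q: int) -> int:
    """NAND from two IMPLY steps plus one work memristor s cleared to 0."""
    s = 0               # RESET the work memristor
    s = imply(p, s)     # s = not p
    s = imply(q, s)     # s = (not q) or (not p) = NAND(p, q)
    return s

# NAND is functionally complete, so any Boolean function can be composed
# from such in-memory steps without shuttling data to a separate CPU.
```

Because every step rewrites a resistance state in place, the "bus transfer" the abstract mentions simply never happens in this model.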
Bio: Dr. Sumanta Pyne received the B.Tech. degree from Maulana Abul Kalam Azad University of Technology (formerly known as West Bengal University of Technology), the M.E. degree from the Indian Institute of Engineering Science and Technology, Shibpur, and the PhD degree from the Indian Institute of Technology Kharagpur, all in Computer Science and Engineering. Since 2015, he has been on the faculty of Computer Science and Engineering at the National Institute of Technology Rourkela, Odisha. His research interests include low-power VLSI design, computer architecture, embedded systems, compilers, and digital microfluidics.
Speaker: Dipankar Raychaudhuri, WINLAB, Rutgers University, USA
Talk Title: Low Latency Mobile Edge Cloud Services in Next-Generation Wireless Networks
Abstract: Next generation wireless access networks (5G and beyond) are being designed to support mobile edge cloud services with tight latency constraints. In this talk, we consider the design challenges of realizing low latency applications such as augmented/virtual reality (AR/VR) and cyber-physical systems (CPS), starting with an identification of bottlenecks associated with current technologies. An end-to-end approach for latency reduction is proposed via improvements to the radio access layer, the mobile core network architecture, and the edge cloud subsystem. Specific techniques for reducing latency are discussed, including redesign of the mobile core network with fast optical technology and elimination of centralized gateways, virtual network protocols with application aware routing, and real-time scheduling and orchestration of cloud tasks. The talk concludes with a brief overview of the COSMOS testbed currently being deployed in New York City for supporting experimental research on emerging edge-cloud enhanced wireless systems.
Bio: Dr. Dipankar Raychaudhuri is a Distinguished Professor in Electrical and Computer Engineering and Director of WINLAB (Wireless Information Network Lab) at Rutgers University. As WINLAB's Director, he is responsible for an internationally recognized industry-university research center specializing in wireless technology. He has served (or is serving) as principal investigator for several large U.S. National Science Foundation funded projects including the ORBIT wireless testbed, the MobilityFirst future Internet architecture, and the COSMOS city-scale platform for advanced wireless research. Dr. Raychaudhuri has previously held corporate R&D positions including: Chief Scientist, Iospan Wireless (2000-01), Assistant General Manager & Department Head, NEC Laboratories (1993-99), and Head, Broadband Communications, Sarnoff Corp (1990-92). He obtained the B.Tech. (Hons) degree from IIT Kharagpur in 1976 and the M.S. and Ph.D. degrees from SUNY Stony Brook in 1978 and 1979, respectively. He is a Fellow of the IEEE and the recipient of several professional awards including the Rutgers School of Engineering Faculty of the Year Award (2017), the IEEE Donald G. Fink Award (2014), the IIT Kharagpur Distinguished Alumni Award (2012), and the Schwarzkopf Prize for Technological Innovation (2008).
Speaker: Dipanwita Roy Chowdhury, Indian Institute of Technology, Kharagpur
Talk Title: Energy-Efficient Blockchains for Cryptocurrency
Abstract: Cryptocurrencies are digital currencies that enable parties to exchange money without involving any trusted central authority such as a bank. Natively, they use non-interactively verifiable distributed ledgers called blockchains to log transactions. As the name suggests, blockchains are chains of blocks of data, where the chains are nothing but hash pointers between consecutive blocks. The parties, traditionally termed miners, need to solve an instance of some fixed computational problem in order to write their transactions into the blockchain. This computational problem is called proof-of-work (PoW). Quite a few cryptocurrencies, including Bitcoin and Litecoin, use one-way hash functions in their PoW. For example, the PoW of Bitcoin uses the SHA-256 hash function. Although SHA-256 provides 128-bit security, it demands a considerable amount of computational power, particularly in hardware. Recently, cellular automata (CA) have been used in a new PoW, namely CArrency, to reduce the power requirement while maintaining 128-bit security. CArrency achieves this by reducing the slice count required in its FPGA implementation. Linear Hybrid Cellular Automata (LHCA) serve this purpose effectively, as any n-cell LHCA can be accommodated in a small combinational circuit using O(n) slices on FPGAs. In this talk, I shall introduce Cellular Automata as a primitive for energy-efficient PoW with comparable security.
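To make the PoW and hash-pointer ideas concrete, here is a minimal SHA-256-based proof-of-work sketch in the spirit of Bitcoin's (the 8-byte nonce layout and leading-zero-bits difficulty encoding are simplified for illustration and do not match Bitcoin's actual block format):

```python
import hashlib

def proof_of_work(block_data: bytes, difficulty_bits: int) -> int:
    """Search for a nonce whose SHA-256 digest has `difficulty_bits`
    leading zero bits; finding it is costly, verifying it is one hash."""
    target = 1 << (256 - difficulty_bits)   # digest must fall below this value
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Blocks are chained by a hash pointer: each block commits to the previous
# block's digest, so tampering with any block invalidates every later proof.
prev_hash = hashlib.sha256(b"genesis").digest()
nonce = proof_of_work(prev_hash + b"tx-data", difficulty_bits=16)
```

A CA-based PoW such as CArrency keeps this find-hard/verify-easy structure but replaces the hash primitive with one that is far cheaper to realize in FPGA slices.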
Bio: Dr. Dipanwita Roy Chowdhury is a Professor in the Department of Computer Science and Engineering, Indian Institute of Technology Kharagpur, India. Her current research interests are in the design and analysis of cryptographic algorithms, VLSI design and secure embedded systems, error-correcting codes, and cellular automata. She received her B.Tech and M.Tech degrees in Computer Science from the University of Calcutta in 1987 and 1989, respectively, and her PhD degree from the Department of Computer Science and Engineering, Indian Institute of Technology Kharagpur, in 1994. She has published more than 160 technical papers in international journals and conferences. Dr. Roy Chowdhury is the recipient of the INSA Young Scientist Award and the Abdul Kalam Technology Innovation National Fellowship, an Associate of the Indian Academy of Sciences, and a Fellow of the Indian National Academy of Engineering.
Speaker: Sunita Sarawagi, Indian Institute of Technology, Bombay
Talk Title: Research Challenges of Embedding Machine Learning in Cyber-Physical Systems
Abstract: In this talk I will give an overview of two recent research results on using machine learning (ML) in an embedded mode in smart cyber-physical systems. First, we address the challenge of handling concept drift arising out of changes in the physical environment. We discuss a simple method that starts with a model with time-sensitive parameters but regularizes its temporal complexity using a Gradient Interpolation (GI) loss. Second, I will talk about the emerging area of algorithmic recourse. Traditional machine learning assumes that the input to the ML model is fixed, whereas in smart environments an agent may be willing to modify the environment settings to capture the input suitable for high-accuracy ML prediction. We present methods of training such a recourse module that proposes alternative environment settings in which to recapture the input for more reliable ML deployment.
Bio: Dr. Sunita Sarawagi is a Professor of Computer Science and Engineering, and head of the Center for Machine Intelligence and Data Science, at IIT Bombay. Her research is in the fields of databases and machine learning. She received her Ph.D. degree in databases from the University of California at Berkeley and a B.Tech. degree from IIT Kharagpur. She has also worked at Google Research (2014-2016), CMU (2004), and the IBM Almaden Research Center (1996-1999). She is an ACM Fellow, and was awarded the Infosys Prize (2019) for Engineering and Computer Science and the Distinguished Alumnus Award from IIT Kharagpur. She has numerous publications, including notable paper awards at the ACM SIGMOD, ICDM, and NeurIPS conferences. She has served on the boards of directors of ACM SIGKDD and the VLDB Endowment, and on the editorial boards of the ACM Transactions on Database Systems and the ACM Transactions on Knowledge Discovery from Data. She was a program chair for the ACM SIGKDD 2008 conference and research track co-chair for the VLDB 2011 conference.
Speaker: Elad Schiller, Chalmers University, Sweden
Talk Title: Far-more Dependable Distributed Systems
Abstract: The Apollo Guidance Computer (AGC) was built under the design assumption that all software will eventually encounter failures, which are often transient. The challenge was addressed by a multi-level restart mechanism, i.e., a restart of a failed job or a group of failed jobs, or even a complete restart of the entire system. This restart mechanism was able to step in and save the day during the approach to the moon landing, when repeated failures occurred that had been unforeseen at design time. In this talk, we will discuss algorithmic tools for recovering from unforeseen failures. Specifically, we will focus on the design of more robust solutions using the dual design criteria of self-stabilization and Byzantine fault tolerance. This rigorous notion of fault tolerance can deal with the arbitrary behavior of up to t < n/3 nodes captured by a malicious adversary in a system of n nodes. In addition, the system can automatically recover after the occurrence of arbitrary transient faults. These faults represent any violation of the assumptions under which the system was designed to operate (provided that the algorithm code remains intact). As such, transient faults can leave the system in an arbitrary state, from which the self-stabilizing algorithm should guarantee recovery within finite time. In the remainder of the talk, we will focus on recent developments related to self-stabilizing Byzantine fault-tolerant (SSBFT) consensus algorithms.
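The t < n/3 resilience bound mentioned in the abstract follows from a classic quorum-intersection argument; the arithmetic can be sketched as a back-of-envelope check (illustrative only, not the algorithms presented in the talk):

```python
def quorum_overlap(n: int, t: int) -> int:
    """With t nodes possibly silent, decisions use quorums of size n - t.
    Any two such quorums in an n-node system share >= n - 2t nodes."""
    return 2 * (n - t) - n  # = n - 2t

def tolerates_byzantine(n: int, t: int) -> bool:
    """Safety requires the overlap to contain at least one correct node
    even if t of the shared nodes are Byzantine:
    n - 2t > t, i.e. n > 3t (equivalently t < n/3)."""
    return quorum_overlap(n, t) > t

# The classic minimal configuration: 4 nodes tolerate 1 Byzantine node,
# but 3 nodes cannot.
assert tolerates_byzantine(4, 1) and not tolerates_byzantine(3, 1)
```

Self-stabilization is the orthogonal guarantee layered on top: even if a transient fault corrupts every node's state, the algorithm converges back to correct behavior within finite time.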
Bio: Dr. Elad Michael Schiller is a professor of Computer Science and Engineering at Chalmers University of Technology. In 2006, he received his Ph.D. from the department of Computer Science at Ben-Gurion University of the Negev. His post-doctoral period was at MIT, Patras University, and the Technical University of Braunschweig. His research interests focus on the theory of fault-tolerant distributed systems and the application of these approaches to new systems. His publications in this area include more than 80 papers. He has participated in a number of national and international projects. He is frequently invited to serve as a member of the program committee of conferences in the area of distributed computing, such as PODC, ICDCS, and SSS. He participated in the organization of several international conferences in the area, such as ACM PODC, SSS, and ICDCN.
Speaker: Balaram Sinharoy, Systems Infrastructure Group, Google, USA
Talk Title: Next Era of Computing for Planetary-scale Applications
Abstract: In the last two decades, we have seen exponential growth in hardware performance and capacity in every component of systems infrastructure, from microprocessors and accelerators to storage and networking. This has given rise to planetary-scale services and connectivity that run highly resilient and secure applications, some of which have over a billion users worldwide. With the slowdown of Moore's law and the end of Dennard scaling, there is a profound impact on the ability to grow datacenter capability in a cost-efficient manner. In this talk, we will show that the current computing landscape requires new, innovative thinking for the next era of computing. We will discuss several efforts in our group to improve system capability to match the exponential demand for Data, AI, and Analytics for Planetary-scale Applications.
Bio: Dr. Balaram Sinharoy is currently a Lead Architect in Google's Systems Infrastructure group, where he leads a team defining the next-generation System Architecture for Data Centers. Prior to joining Google, Dr. Sinharoy was an IBM Fellow for 10 years and served as the Chief Architect of several generations of POWER microprocessors and systems. Dr. Sinharoy has published numerous conference and journal articles and has been granted over 200 patents in many areas of Computer Architecture. He has been a keynote speaker at various IEEE conferences, such as Micro-42 and ARITH-25, among others, and has taught at Rensselaer Polytechnic Institute and the University of North Texas. Dr. Sinharoy received his B.Tech. degree in Computer Science from the University of Calcutta and his MS and PhD from Rensselaer Polytechnic Institute. He is an IBM Master Inventor and an IEEE Fellow (2009).
Speaker: Susmita Sur-Kolay, Indian Statistical Institute, Kolkata, India
Talk Title: Challenges in Designing Quantum Computing Systems
Abstract: More than half a century after quantum mechanics was accepted as a more accurate model of atomic physics, Richard Feynman proposed in 1981 the brilliant vision of building computers based on quantum mechanical systems to solve problems requiring enormous amounts of computation on classical machines. While significant speed-ups of quantum algorithms over classical ones for certain problems were proved in the '90s, notable progress in the technology of quantum computers has come only recently. The focus of this talk is on the challenges of building quantum circuits for quantum algorithms, taking fault tolerance and error correction into account. The associated optimization problems, with their tradeoff between the quantum resources needed and the number of cycles of operations, which our group has been studying, are briefly presented. Finally, approaches to tackle present-day noisy intermediate-scale quantum (NISQ) computers are discussed.
Bio: Dr. Susmita Sur-Kolay (Ph.D., Jadavpur University; B.Tech. (Hons.) in Electronics and Electrical Communications Engineering, Indian Institute of Technology Kharagpur) is presently a Senior Professor in the Advanced Computing and Microelectronics Unit of the Indian Statistical Institute, Kolkata. During 1993-99, she was a Reader in the Department of Computer Science and Engineering of Jadavpur University. Prior to that, she was a post-doctoral fellow at the University of Nebraska-Lincoln and a Research Assistant at the Laboratory for Computer Science at the Massachusetts Institute of Technology. She has also held sabbatical positions at Princeton University, Intel Corp., USA, and the University of Bremen, Germany. Her research contributions are in the areas of algorithms for design automation in emerging computing technologies, hardware security, synthesis of quantum circuits, and graph algorithms. She has published in leading international journals and peer-reviewed conference proceedings. She has served as General Co-Chair and Technical Program Co-Chair of the International Conference on VLSI Design and the Symposium on VLSI Design and Test (2007), and on the program committees of several top international conferences. She has served on the editorial boards of the IEEE Transactions on VLSI Systems and the ACM Transactions on Embedded Computing Systems. She has been a Distinguished Visitor of the IEEE Computer Society (India), and is a Senior Member of the IEEE, a Fellow of the Indian National Academy of Engineering, and a Member of the ACM, the IET, and the VLSI Society of India. Among other awards, she received the President of India Gold Medal (summa cum laude, 1980) and the Distinguished Alumnus Award (2020) of IIT Kharagpur, the IBM Faculty Award (2009), and the VLSI Society of India Lifetime Woman Achiever Award (2022).
Speaker: P. P. Vaidyanathan, California Institute of Technology, USA
Talk Title: My Signal Processing Journey
Abstract: This talk provides a brief overview of the speaker's research journey over the last several decades, starting from filter bank theory all the way to sparse sensor arrays and Ramanujan sums in signal processing. The audience for this talk consists of eminent scientists in Computer Science who may not be familiar with the details of signal processing research. The goal is therefore to make the talk accessible to such an audience from a different field, so the main ideas will be highlighted without detailed derivations or complicated proofs.
Bio: Dr. P. P. Vaidyanathan (Life Fellow, IEEE) received the B.Tech. and M.Tech. degrees from the Institute of Radiophysics and Electronics at University of Calcutta, India, in 1977 and 1979, respectively, and the Ph.D. degree in electrical and computer engineering from the University of California at Santa Barbara, in 1982. He is the Kiyo and Eiko Tomiyasu Professor of electrical engineering at the California Institute of Technology (Caltech). He was the recipient of the IEEE CAS Society Golden Jubilee Medal, the Terman Award of the ASEE, the IEEE Gustav Robert Kirchhoff Technical Field Award in 2016, and the IEEE Signal Processing Society’s Technical Achievement Award in 2002, Education Award in 2012, and Society Award in 2016. He received the EURASIP Athanasios Papoulis award in 2021 and is a Foreign Fellow of the Indian National Academy of Engineering. He was also the recipient of many awards for his research papers and for teaching at Caltech, including the Northrop Grumman teaching prize. He is a member of the U.S. National Academy of Engineering.