Meeting the Spectrum Demands for NextG Wireless Systems
Date: April 18, 2025, 10am - 11am EST
Abstract: The deployment of NextG wireless networks will continue to increase the demand for wireless spectrum. Several bands are currently being considered to meet this demand, including the lower 3 GHz band and the 7-8 GHz band, both of which are currently used by various federal incumbents. There are two main approaches to enabling commercial use of such bands: clearing spectrum by moving the incumbents out of a band, or sharing spectrum with incumbent users. We will discuss the trade-offs between these two approaches and the challenges in determining the path forward. These trade-offs in turn depend on the economic impact of different approaches on the market for wireless services. We will discuss a framework for gaining insight into these effects based on game-theoretic models of competition with congestible resources. We will use this framework to illustrate the potential impacts of different approaches for providing spectrum to NextG services.
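As a toy illustration of competition over a congestible resource (an illustrative sketch with made-up prices and congestion coefficients, not the model from the talk), consider users splitting between two service providers until the delivered cost, price plus congestion, is equalized:

```python
# Toy Wardrop-style equilibrium for two providers on congestible spectrum.
# All prices and congestion coefficients below are illustrative assumptions.

def equilibrium_split(p1, p2, c1, c2):
    """Fraction x of users choosing provider 1 such that
    p1 + c1*x == p2 + c2*(1 - x), clipped to [0, 1]."""
    x = (p2 + c2 - p1) / (c1 + c2)
    return max(0.0, min(1.0, x))

# Provider 1 is cheaper, so it attracts more load until congestion
# equalizes the delivered cost across the two providers.
x = equilibrium_split(p1=1.0, p2=1.5, c1=2.0, c2=2.0)
print(round(x, 3))  # 0.625
```

Models of this flavor let one compare, e.g., a band cleared for one provider against a band shared by several, by seeing how prices and congestion shift the equilibrium.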
Biography: Randall Berry joined Northwestern University in 2000, where he is currently the Chair and John A. Dever Professor in the Department of Electrical and Computer Engineering. His research interests span topics in wireless communications, computer networking, network economics, and information theory. Dr. Berry received the M.S. and Ph.D. degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 1996 and 2000, respectively, where he was part of the Laboratory for Information and Decision Systems. Dr. Berry received a 2003 NSF CAREER Award and is an IEEE Fellow. With his co-authors, he has received best paper awards at the IEEE Workshop on Smart Data Pricing in 2015 and 2017 and at the 2016 WiOpt conference. He has served as an Editor for the IEEE Transactions on Wireless Communications and the IEEE Transactions on Information Theory and is currently a Division Editor for the Journal of Communications and Networks and an Area Editor for the IEEE Open Journal of the Communications Society.
Cosmic Backscatter and Other Ways to Communicate by Modulating Noise
Date: January 31, 2025, 1pm - 2pm EST
Abstract: Backscatter communication enables low-power data transmission by shifting the burden of radio signal production from the energy-constrained endpoint device to a powered interrogator. Ambient backscatter demonstrated the possibility of using pre-existing, information-carrying broadcast radio waves as a carrier, eliminating the need for a dedicated signal generation source. I will describe new communication methods that allow an endpoint device, with a very similar design to a backscatter endpoint, to communicate without the need for any RF carrier, whether deliberately generated or ambient. I will present several variants of this idea, including modulating Johnson noise in a resistor, and “Cosmic backscatter,” in which cosmic noise sources are modulated to encode information.
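A hedged, simplified model of the idea (my own illustrative sketch; the actual systems modulate physical thermal or cosmic noise, not simulated samples): the endpoint switches between two noise-power states to encode bits, and the receiver decodes by thresholding the measured noise power in each bit interval.

```python
import random

def transmit(bits, n=2000, var_lo=1.0, var_hi=2.0, rng=None):
    """Emit Gaussian noise whose variance encodes each bit
    (e.g. a resistor switched between two impedance states)."""
    rng = rng or random.Random(0)
    samples = []
    for b in bits:
        sd = (var_hi if b else var_lo) ** 0.5
        samples.append([rng.gauss(0.0, sd) for _ in range(n)])
    return samples

def receive(samples, threshold=1.5):
    """Decode by comparing per-interval noise power to a threshold."""
    decoded = []
    for interval in samples:
        power = sum(x * x for x in interval) / len(interval)
        decoded.append(1 if power > threshold else 0)
    return decoded

bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert receive(transmit(bits)) == bits
```

The data rate of such a scheme is limited by how many samples are needed to distinguish the two noise-power levels reliably, which is why these links are very low rate but also extremely low power at the endpoint.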
Biography: Joshua R. Smith is the Milton and Delia Zeutschel Professor in the Allen School of Computer Science and Engineering and the Department of Electrical and Computer Engineering at the University of Washington, Seattle, where he leads the Sensor Systems research group. He is a Fellow of the National Academy of Inventors and a Fellow of the IEEE. He leads the UW + Amazon Science Hub, a research center that facilitates collaboration between Amazon and any unit of the University of Washington. He was named an Allen Distinguished Investigator by the Paul G. Allen Family Foundation, and he was the thrust leader for Communications and Interface in the NSF Engineering Research Center (ERC) for Sensorimotor Neural Engineering. Before joining UW he was a Principal Engineer at Intel.
In recent years his research has focused on wirelessly powering and communicating with sensor systems in applications such as implanted biomedical electronics, ubiquitous computing, robotics, and space. He is a co-founder of three startup companies that are commercializing research from his lab: Proprio, Wibotic, and Waveworks (formerly Jeeva Wireless). He received B.A. degrees in computer science and philosophy from Williams College, the M.A. degree in physics from Cambridge University, and Ph.D. and S.M. degrees from MIT.
From AIoT to Embodied AI – The Future of AI is Embedded
Date: November 22, 2024, 1pm - 2pm EST
Abstract: The rapid advancements in artificial intelligence are transforming embedded sensing systems, bridging the physical and digital worlds in unprecedented ways. The Artificial Intelligence of Things (AIoT) integrates intelligence into IoT devices, enabling smarter environments, while Embodied AI takes large language models and foundation models beyond the digital realm, embedding them into physical systems to create interactive and intelligent entities in the real world. This talk highlights projects from the Columbia Intelligent and Connected Systems Lab that showcase this convergence. In urban safety, we developed a mobile AIoT system using acoustic sensors and embedded machine learning to alert pedestrians and workers to dangers from nearby vehicles, which we then generalize into an audio filtering architecture with broad applicability. In healthcare, we present a low-cost, vision-based AIoT system for fever screening that achieves superior accuracy at a fraction of commercial costs. In the wearable space, we introduce a glasses-based platform for biosignal acquisition and emotion recognition, as well as an AR-assisted intelligent stethoscope for self-health screening. In the intelligent environments space, we present a reconfigurable drone platform that supports on-demand task execution through natural language interfaces powered by large language models, scene understanding with vision language models, and a modular architecture for on-demand reconfiguration of sensors and actuators. A low-cost infrared light-based technique further enhances drone autonomy, enabling precision landing and short-distance NLOS indoor navigation. These projects illustrate how embedded intelligence is shaping the future of AI, redefining the relationship between humans, machines, and their environments.
Biography: Xiaofan (Fred) Jiang is an associate professor in the Electrical Engineering department at Columbia University and co-Chair of the Smart Cities Center at the Data Science Institute. Jiang received his PhD in Computer Science from UC Berkeley in 2010. His research lies at the intersection of systems and data, with a focus on intelligent embedded systems and their applications in mobile and wearable computing, intelligent built environments, the Internet of Things, cyber-physical human systems, and connected health. His research has been published in top-tier venues and has received numerous awards, including Best Paper Awards at MobiSys '24, BuildSys '24, ITEC '21, IPSN ’05 and Best Demo Awards at IPSN '23, SenSys '21, IPSN '20, IoTDI '18 and SenSys ’11. He has served on technical and organizing committees of leading conferences in the field, including TPC Chair of BuildSys ’14, TPC Chair of e-Energy '23, General Chair of SenSys ’19, General Chair of BuildSys '21, and General Chair of IPSN '23. Currently, he is the Vice Chair of ACM SIGEnergy, founding area editor of ACM Energy Informatics Review, TPC co-Chair of SenSys '25, and associate editor for ACM HEALTH, ACM TOSN, and ACM TIOT journals. His research has been featured in many popular media outlets, including The Economist, New York Post, Mashable, Gizmodo, The Telegraph, and Fast Company. He is the recipient of an NSF Graduate Fellowship, a Vodafone-US Foundation Fellowship, and an NSF CAREER Award.
Decentralized and Dispersed Computing for the Internet of Things
Date: November 15, 2024, 1pm - 2pm EST
Abstract: This talk touches on two different paradigms: decentralized computing, and dispersed computing. The former pertains to applications that can operate across trust boundaries, while the latter is focused on enabling complex applications to be deployed on heterogeneous networked edge and cloud computing resources.
I will present results from research at USC touching on these paradigms in the context of the internet of things. Our work on decentralized computing includes a new mobile-oriented blockchain protocol, smart contracts to enable cheat-proof peer-to-peer trading of digital goods, and a decentralized review mechanism suitable for data marketplaces. Our work on dispersed computing includes an open-source orchestrator, a fast scheduler that uses graph convolutional networks, and a novel adversarial analysis of scheduling algorithms.
Biography: Bhaskar Krishnamachari is a Professor of Electrical and Computer Engineering at the USC Viterbi School of Engineering. His research spans the design and evaluation of algorithms and protocols for wireless networks, distributed systems, and the internet of things. He is the co-author of more than 300 technical papers and three books, which have been collectively cited more than 34,000 times. He has co-authored papers that have received awards at ACM/IEEE IPSN, ACM MobiCom, and IEEE VNC. He is an IEEE Fellow.
Advancing Mobile Health with Large Models
Date: October 25, 2024, 1pm - 2pm EST
Abstract: Wearable devices and mobile health technologies are becoming increasingly vital for personal health, offering new possibilities for continuous sensing and timely interventions. However, the potential for these devices extends far beyond traditional classification and regression tasks like activity recognition and heart rate monitoring. In this talk, I will explore how we build and evaluate large-scale models that interact with wearable data. First, I will discuss how we scale our wearable foundation models with 40 million hours of multimodal sensor data and whether scaling laws apply to wearable signals. Next, I will delve into our efforts to enable large language models and agents to interpret and transform wearable data into actionable health insights. I will highlight how these research advancements are paving the way toward a new era of mobile health AI.
Biography: Xin Liu is a Research Scientist at Google Consumer Health Research, focusing on the intersection of machine learning, ubiquitous computing, and health. He has authored over 30 peer-reviewed papers in top venues across machine learning (NeurIPS, ICLR), mobile and ubiquitous computing (CHI, UbiComp), and biomedical engineering (TBME, JBHI). His work has been recognized with the Google PhD Fellowship and the Best Paper Award at UbiComp 2023. His research has also been featured in four oral presentations at NeurIPS, ICLR, and WACV and covered by media outlets such as IEEE Spectrum, ACM TechNews, GeekWire, ZDNet, and UW News. His PhD research in camera-based contactless health sensing has been widely adopted by Google and industry startups. He received his PhD in Computer Science from the University of Washington, Seattle, in 2023 and his bachelor's degree with highest honors from the University of Massachusetts Amherst in 2018.
Deep Learning-enabled Computational Microscopy and Diffractive Imaging
Date: October 4, 2024, 1pm - 2pm EST
Abstract: In this presentation, I will provide an overview of our recent work on using deep neural networks to advance computational microscopy and sensing systems, also covering their biomedical applications, including virtual staining of label-free tissue for pathology. I will also discuss diffractive optical networks designed by deep learning to all-optically implement various complex functions as the input light diffracts through spatially engineered surfaces. These diffractive processors designed by deep learning have various applications, e.g., all-optical image analysis, feature detection, object classification, computational imaging, and seeing through diffusers; they also enable task-specific camera designs and new optical components for spatial, spectral, and temporal beam shaping as well as spatially controlled wavelength division multiplexing. These deep learning-designed diffractive systems can broadly impact (1) all-optical statistical inference engines, (2) computational camera and microscope designs, and (3) task-specific inverse design of optical systems. In this talk, I will give examples of each group, enabling transformative capabilities for various applications of interest in, e.g., autonomous systems, defense/security, and telecommunications, as well as biomedical imaging and sensing.
Biography: Dr. Aydogan Ozcan is the Chancellor’s Professor and the Volgenau Chair for Engineering Innovation at UCLA and an HHMI Professor with the Howard Hughes Medical Institute. He is also the Associate Director of the California NanoSystems Institute. Dr. Ozcan is an elected Fellow of the National Academy of Inventors (NAI) and holds more than 80 issued/granted patents in microscopy, holography, computational imaging, sensing, mobile diagnostics, nonlinear optics and fiber-optics, and is also the author of one book and the co-author of more than 1000 peer-reviewed publications in leading scientific journals/conferences. Dr. Ozcan has received major awards, including the Presidential Early Career Award for Scientists and Engineers (PECASE), International Commission for Optics ICO Prize, Dennis Gabor Award (SPIE), Joseph Fraunhofer Award & Robert M. Burley Prize (Optica), SPIE Biophotonics Technology Innovator Award, Rahmi Koc Science Medal, SPIE Early Career Achievement Award, Army Young Investigator Award, NSF CAREER Award, NIH Director’s New Innovator Award, Navy Young Investigator Award, IEEE Photonics Society Young Investigator Award and Distinguished Lecturer Award, National Geographic Emerging Explorer Award, National Academy of Engineering The Grainger Foundation Frontiers of Engineering Award, and MIT’s TR35 Award for his seminal contributions to computational imaging, sensing, and diagnostics. Dr. Ozcan is an elected Fellow of Optica, AAAS, SPIE, IEEE, AIMBE, RSC, APS and the Guggenheim Foundation, and is a Lifetime Fellow Member of Optica, NAI, AAAS, APS and SPIE. Dr. Ozcan is also listed as a Highly Cited Researcher by Web of Science, Clarivate.
Using Modern ML Tools to Enhance Internet Measurement and Data Processing: Three Case Studies
Date: April 12, 2024, 1pm - 2pm EST
Abstract: Internet measurement via IP address and port scans is the backbone of cybersecurity research, including the detection of vulnerabilities and large-scale cyber risk analysis. In this talk, I will present three case studies on applying modern machine learning methods to enhance the acquisition and processing of Internet measurement data. The first is a framework that constructs low-dimensional numerical fingerprints (embeddings) of discoverable hosts by applying variational autoencoders (VAEs) to scan data. These embeddings can be used for visualizing the distribution of hosts in a particular collection/network, and for various downstream (supervised) learning tasks such as detection and prediction of malicious hosts. The second study shows how large language models (LLMs) can be used to generate text-based host fingerprints directly from the raw text contained in scan data. Compared to the existing approach of manually curated fingerprints, we show that our approach can identify new IoT devices and server products that were not previously captured. The last study demonstrates that the scanning methodology itself can be made substantially more efficient (covering 99% of the active hosts at a probing rate of 14.2% compared to a standard, exhaustive scan) by learning cross-protocol correlation and using it in real time to determine which ports to scan and in what sequence.
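To make the idea behind the third study concrete, here is a hypothetical miniature of correlation-guided scanning (the port lists, counts, and the naive conditional-probability estimator are my own illustrative assumptions, not the study's actual method):

```python
from collections import defaultdict

def learn_cooccurrence(scans):
    """scans: list of sets of open ports per host. Returns pair counts
    and marginal counts for a naive conditional-probability estimate."""
    pair = defaultdict(int)
    marg = defaultdict(int)
    for open_ports in scans:
        for a in open_ports:
            marg[a] += 1
            for b in open_ports:
                if a != b:
                    pair[(a, b)] += 1
    return pair, marg

def next_port(observed_open, candidates, pair, marg):
    """Pick the unscanned port most likely to be open, given ports
    already observed open (naive max over single-port conditions)."""
    def score(b):
        return max((pair[(a, b)] / marg[a] for a in observed_open if marg[a]),
                   default=0.0)
    return max(candidates, key=score)

history = [{22, 80, 443}, {80, 443}, {22}, {80, 443, 8080}]
pair, marg = learn_cooccurrence(history)
print(next_port({80}, [22, 443, 8080], pair, marg))  # 443: co-occurs most with 80
```

Probing high-yield ports first, and stopping early on unpromising hosts, is what lets a learned scanner cover most active hosts with a small fraction of the probes.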
Biography: Mingyan Liu is the Alice L. Hunt Collegiate Professor of Engineering, a professor of Electrical Engineering & Computer Science, and the Associate Dean for Academic Affairs of the College of Engineering at the University of Michigan, Ann Arbor. She received her Ph.D. Degree in electrical engineering from the University of Maryland, College Park, in 2000 and has been with UM ever since. From Sept 2018 to May 2023, she was the Peter and Evelyn Fuss Chair of Electrical and Computer Engineering. Her research interests are in optimal resource allocation, performance modeling, sequential decision and learning theory, game theory and incentive mechanisms, with applications to large-scale networked systems, cybersecurity and cyber risk quantification. She is a Fellow of the IEEE and a member of the ACM.
Robust and Private Learning using Hardware-Software Co-Design in Centralized and Federated Settings
Date: March 29, 2024, 1pm - 2pm EST
Abstract: Models with deep architectures are enabling a significant paradigm shift in a diverse range of fields, including natural language processing and computer vision, as well as the design and automation of complex integrated circuits. While deep models, and optimizations based on them such as Deep Reinforcement Learning (RL), demonstrate superior performance and a great capability for automated representation learning, earlier works have revealed the vulnerability of DL to various attacks. This susceptibility to potential attacks might thwart trustworthy technology transfer as well as reliable deployment. In this talk, we discuss end-to-end defense schemes for robust deep learning in both centralized and federated learning settings. We also propose novel solutions that address both robustness and privacy criteria. Our comprehensive analyses reveal that defense strategies developed using DL/software/hardware co-design outperform their DL/software-only counterparts, and show how they can achieve highly efficient, latency-optimized defenses for real-world applications.
Biography: Farinaz Koushanfar is the Henry Booker Professor of Electrical and Computer Engineering (ECE) at the University of California San Diego (UCSD), where she is the founding co-director of the UCSD Center for Machine-Intelligence, Computing & Security (MICS). She is also a research scientist at Chainlink Labs. Her research addresses several aspects of secure and efficient computing, with a focus on robust machine learning under resource constraints, AI-based optimization, hardware and system security, intellectual property (IP) protection, as well as privacy-preserving computing. Dr. Koushanfar is a fellow of the Kavli Frontiers of the National Academy of Sciences, and a fellow of IEEE / ACM. She has received a number of awards and honors including the Presidential Early Career Award for Scientists and Engineers (PECASE) from President Obama, the ACM SIGDA Outstanding New Faculty Award, Cisco IoT Security Grand Challenge Award, MIT Technology Review TR-35, Qualcomm Innovation Awards, Intel Collaborative Awards, Young Faculty/CAREER Awards from NSF, DARPA, ONR and ARO, as well as several best paper awards.
Towards AIoT: Building Intelligence into Sensing, Networking, and Data Analytics of IoT
Date: March 22, 2024, 11am - 12pm EST
Abstract: AIoT provides opportunities to transcend state-of-the-art technologies in both AI and IoT. On one hand, the unprecedented scale and prevalence of IoT data magnifies the power of AI; on the other hand, machine intelligence from AI enhances every aspect of sensing, computing, and communication in today's IoT. This talk will introduce our recent research efforts toward devising viable AIoT solutions that seek to address fundamental challenges due to (i) the massiveness of devices, where hundreds of billions of networked sensors in the physical world may exhaust limited computing and communication resources; (ii) sensing intrusion on people and their things, where improper IoT instrumentation may impair the harmonious co-existence of people and machine intelligence; and (iii) the plethora of data, where the unprecedented scale and prevalence of IoT data not only contributes to training powerful AI but also sets obstacles for distilling, verifying, adapting, and transferring machine learning processes across people and different cyber- or physically-engineered systems.
Biography: Dr. Mo Li is a Professor at the Hong Kong University of Science and Technology. His research focuses on system aspects of wireless sensing and networking, IoT for smart cities, and urban informatics. Dr. Li has been on the editorial board of IEEE/ACM Transactions on Networking, IEEE Transactions on Mobile Computing, ACM Transactions on Internet of Things, and IEEE Transactions on Wireless Communications, all leading journals in the field. Dr. Li has served as a technical program committee member for top conferences in computer systems and networking, including ACM MobiCom, ACM MobiSys, ACM SenSys, and many others. Dr. Li has been a Distinguished Member of the ACM since 2019 and a Fellow of the IEEE since 2020.
Multimodal Machine Intelligence and its Human-centered Possibilities
Date: February 23, 2024, 1pm - 2pm EST
Abstract: Converging developments across the machine intelligence ecosystem, from multimodal sensing and signal processing to computing, are enabling new human-centered possibilities both in advancing science and in creating technologies for societal applications, including human health and wellbeing. This includes approaches for quantitatively and objectively understanding human communicative, affective, and social behavior, with applications in diagnostics and treatment across varied domains such as distressed relationships, depression, suicide, autism spectrum disorder, addiction, and workplace health and wellbeing. The talk will also discuss the challenges and opportunities in creating trustworthy machine intelligence approaches that are inclusive, equitable, robust, safe, and secure, e.g., with respect to protected variables such as gender, race, age, and ability.
Biography: Shrikanth (Shri) Narayanan is University Professor, Niki & C. L. Max Nikias Chair in Engineering and VP for Presidential Initiatives at the University of Southern California (USC), where he is Professor of Electrical & Computer Engineering, Computer Science, Linguistics, Psychology, Neuroscience, Pediatrics, and Otolaryngology—Head & Neck Surgery, Director of the Ming Hsieh Institute and Research Director of the Information Sciences Institute. Prior to USC, he was with AT&T Bell Labs and AT&T Research. He is a Visiting Faculty Researcher with Google Research. His interdisciplinary research focuses on human-centered sensing/imaging, signal processing, and machine intelligence centered on human communication, interaction, emotions, and behavior. He is a Fellow of the Acoustical Society of America, IEEE, ACM, International Speech Communication Association (ISCA), the American Association for the Advancement of Science, the Association for Psychological Science, the Association for the Advancement of Affective Computing, the American Institute for Medical and Biological Engineering, and the National Academy of Inventors. He is a Guggenheim Fellow and member of the European Academy of Sciences and Arts, and a recipient of many awards for research and education including the 2024 Edward J. McCluskey Technical Achievement Award from the IEEE Computer Society, the 2023 Claude Shannon-Harry Nyquist Technical Achievement Award from the IEEE Signal Processing Society, 2023 ISCA Medal for Scientific Achievement, and the 2023 Richard Deswarte Prize in Digital History. He has published widely and his inventions have led to technology commercialization including through startups he co-founded: Behavioral Signals Technologies focused on AI based conversational assistance and Lyssn focused on mental health care and quality assurance.
Intelligent Edge Services and Foundation Models for Internet of Things Applications
Date: February 9, 2024, 1pm - 2pm EST
Abstract: Advances in neural networks revolutionized modern machine intelligence, but important challenges remain when applying these solutions in IoT contexts; specifically, on lower-end embedded devices with multimodal sensors and distributed heterogeneous hardware. The talk discusses challenges in offering machine intelligence services to support applications in resource constrained distributed IoT environments. The intersection of IoT applications, real-time requirements, distribution challenges, and AI capabilities motivates several important research directions. For example, how to support efficient execution of machine learning components on embedded edge devices while retaining inference quality? How to reduce the need for expensive manual labeling of IoT application data? How to improve the responsiveness of AI components to critical real-time stimuli in their physical environment? How to prioritize and schedule the execution of intelligent data processing workflows on edge-device GPUs? How to exploit data transformations that lead to sparser representations of external physical phenomena to attain more efficient learning and inference? How to develop foundation models for IoT that offer extended inference capabilities from time-series data analogous to ChatGPT inference capabilities from text? The talk discusses recent advances in edge AI and foundation models and presents evaluation results in the context of different real-time IoT applications.
Biography: Tarek Abdelzaher received his Ph.D. in Computer Science from the University of Michigan in 1999. He is currently a Sohaib and Sara Abbasi Professor and Willett Faculty Scholar at the Department of Computer Science, the University of Illinois at Urbana-Champaign. He has authored/coauthored more than 300 refereed publications in real-time computing, distributed systems, sensor networks, and control. He served as Editor-in-Chief of the Journal of Real-Time Systems, and has served as Associate Editor of the IEEE Transactions on Mobile Computing, IEEE Transactions on Parallel and Distributed Systems, IEEE Embedded Systems Letters, the ACM Transactions on Sensor Networks, and the Ad Hoc Networks Journal, among others. Abdelzaher's research interests lie broadly in understanding and influencing performance and temporal properties of networked embedded, social, and software systems in the face of increasing complexity, distribution, and degree of interaction with an external physical and social environment. Tarek Abdelzaher is a recipient of the IEEE Outstanding Technical Achievement and Leadership Award in Real-Time Systems (2012), the Xerox Award for Faculty Research (2011), as well as several best paper awards. He is a fellow of IEEE and ACM.
Federated Learning: New Approaches in Learning and Applications to Lifelong Learning
Date: November 17, 2023, 1pm - 2pm EST
Abstract: New demands in data management are emerging. Some of these constraints are driven by the need for privacy compliance for personal data, and some by the need to train bigger, better, faster models. As such, increasingly more data is stored behind inaccessible firewalls or on users’ devices without the option of sharing it for centralized training. To this end, the Federated Learning (FL) paradigm has been proposed, addressing the privacy concerns while still processing such inaccessible data in a continual manner. However, FL doesn’t come as a free lunch, and new technical challenges have emerged. This talk will present some new ways of addressing such challenges while federating heterogeneous models and dealing with the dynamic nature of learning.
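For readers unfamiliar with the FL paradigm, here is a minimal FedAvg-style aggregation sketch (illustrative background only; the talk addresses the harder setting of federating heterogeneous models). Clients train locally on data that never leaves their devices, and the server averages their parameters weighted by sample count:

```python
# Minimal FedAvg-style server aggregation. The parameter vectors and
# client sizes below are illustrative, not from any real training run.

def fedavg(client_weights, client_sizes):
    """Weighted average of per-client parameter vectors,
    weighted by each client's number of training samples."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients; the first has twice the data, so its update counts double.
global_w = fedavg([[1.0, 2.0], [4.0, 5.0]], client_sizes=[2, 1])
print(global_w)  # [2.0, 3.0]
```

Only parameter updates cross the firewall, never raw data, which is what makes the approach attractive for privacy compliance; the challenges the talk discusses arise when clients' models or data distributions differ.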
Biography: Dr. D. Dimitriadis is a Principal Applied Scientist at Amazon, where he is currently the technical lead for Federated Learning and Analytics across the company, as well as for Continual and Semi-supervised Learning. He previously worked at Microsoft Research, IBM Research, and AT&T Labs (2009-14), and was a lecturer (P.D. 407/80) at the School of ECE, NTUA, Greece. He is a Senior Member of the IEEE and has served as a chair of several conferences and workshops. He has published more than 100 papers in peer-reviewed scientific journals and conferences, with 3000 citations. He received his PhD from NTUA in February 2005; his thesis was titled “Non-Linear Speech Processing, Modulation Models and Applications to Speech Recognition”.
The COSMOS Testbed – a Platform for Advanced Wireless, Edge Cloud, Optical, Smart Streetscapes, and International Experimentation
Date: October 27, 2023, 1pm - 2pm EST
Abstract: This talk will provide an overview of the COSMOS testbed, which is being deployed as part of the NSF Platforms for Advanced Wireless Research (PAWR) program, and briefly review various ongoing experiments in the areas of wireless, optical, edge cloud, and smart cities. COSMOS (Cloud-Enhanced Open Software-Defined Mobile-Wireless Testbed for City-Scale Deployment) is being deployed in West Harlem (New York City). It targets the technology "sweet spot" of ultra-high bandwidth and ultra-low latency, a capability that will enable a broad new class of applications including augmented/virtual reality and cloud-based autonomous vehicles. Realization of such high-bandwidth/low-latency wireless applications involves research not only on radio links, but also on the system as a whole, including algorithmic aspects related to spectrum use, networking, and edge computing. We will present an overview of COSMOS' key enabling technologies, which include mmWave radios, software-defined radios, an optical/SDN x-haul network, and edge cloud. We will then discuss the deployment and outreach efforts as well as the international component (COSMOS Interconnecting Continents - COSM-IC). Finally, we will describe various experiments that have been conducted on the testbed, including in the areas of edge cloud, mmWave wireless, full-duplex wireless, smart streetscapes, and optical communications/sensing. The COSMOS testbed design and deployment is joint work with the COSMOS team.
Biography: Gil Zussman received the Ph.D. degree in Electrical Engineering from the Technion in 2004. Between 2004 and 2007 he was a Postdoctoral Associate at MIT. Since 2007 he has been with Columbia University where he is a Professor of Electrical Engineering and Computer Science (affiliated faculty), and member of Data Science Institute. His research interests are in the area of networking, and in particular in the areas of wireless, mobile, and resilient networks. He has been an associate editor of IEEE/ACM Trans. on Networking, IEEE Trans. on Control of Network Systems, IEEE Trans. on Wireless Communications and the TPC Chair of IEEE INFOCOM’23 and ACM MobiHoc’15. Gil is an IEEE Fellow and received two Marie Curie fellowships, the Fulbright Fellowship, the DTRA Young Investigator Award, and the NSF CAREER Award. He is a co-recipient of seven best paper awards, including the ACM SIGMETRICS’06 Best Paper Award, the 2011 IEEE Communications Society Award for Advances in Communication, and the ACM CoNEXT’16 Best Paper Award.
Vehicle Computing: Vision and Challenges
Date: October 20, 2023, 1pm - 2pm EST
Abstract: Vehicles have primarily been used for transportation over the last century. With the proliferation of onboard computing and communication capabilities, we envision that future connected vehicles (CVs) will serve as mobile computing platforms in addition to their conventional transportation role. In this talk, we present the vision of Vehicle Computing: CVs are well-suited computation platforms, and connected devices/things with limited computation capacity can rely on surrounding CVs to perform complex computational tasks. We also discuss Vehicle Computing from several aspects, including key enabling technologies, case studies, open challenges, and the potential business model.
Biography: Weisong Shi is a Professor and Chair of the Department of Computer and Information Sciences at the University of Delaware (UD). He leads the Connected and Autonomous Research (CAR) Laboratory. Dr. Shi is the Center Director of a recently funded NSF eCAT Industry-University Cooperative Research Center (IUCRC) (2023-2028), focusing on Electric, Connected, and Autonomous Technology for Mobility. He is an internationally renowned expert in edge computing, autonomous driving, and connected health. His pioneer paper, "Edge Computing: Vision and Challenges,” has been cited more than 6000 times. Before joining UD, he was a professor at Wayne State University (2002-2022). He served in multiple administrative roles, including Associate Dean for Research and Graduate Studies at the College of Engineering and Interim Chair of the Computer Science Department. Dr. Shi also served as a National Science Foundation (NSF) program director (2013-2015). He was the chair of the IEEE Computer Society Special Technology Community on Autonomous Driving Technologies (ADT) and the Strategic Planning Committee member of the Autoware Foundation. He is a Fellow of IEEE and a member of NSF CISE Advisory Committee. More information can be found at http://weisongshi.org.
Introducing Project Aria: A New Tool for Egocentric Multi-Modal AI Research
Date: October 6, 2023, 10am - 11am EST
Abstract: Egocentric, multi-modal data as available on future augmented reality (AR) devices presents unique challenges and opportunities for machine perception. These future devices will need to be all-day wearable in a socially acceptable form factor to support always-available, context-aware, and personalized AI applications. Our team at Meta Reality Labs Research built the Aria device, an egocentric, multi-modal data recording and streaming device, with the goal of fostering and accelerating research in this area. In this talk, we will introduce the Aria device hardware, including its sensor configuration, and the corresponding software tools that enable recording and processing of such data. We will show live demos of research applications that this device platform enables.
Biography: Kiran Somasundaram is a Systems Architect at Meta Reality Labs developing machine perception technologies to enable all-day wearable AR smart glasses. He received his Ph.D. in Electrical and Computer Engineering from the University of Maryland, College Park, in 2010. Prior to joining Meta, Kiran worked at Qualcomm Research on projects across robotics and mobile AR technologies.
Training Machine Learning Models with Private Data
Date: September 22, 2023, 1pm - 2pm EST
Abstract: Privacy- and security-related concerns are growing as machine learning reaches diverse application domains. Data holders want to train or infer with private data while exploiting accelerators, such as GPUs, hosted in the cloud. Cloud systems are vulnerable to attackers who compromise the privacy of data and the integrity of computations. Tackling this challenge efficiently requires exploiting hardware security capabilities to reduce the cost of theoretical privacy algorithms. This talk will describe my group's experiences in building privacy-preserving machine learning systems. I will present DarKnight, a framework for large DNN training that protects input privacy and computation integrity. DarKnight relies on cooperative execution between trusted execution environments (TEEs) and accelerators, where the TEE provides privacy and integrity verification while the accelerators perform the bulk of the linear algebraic computation to optimize performance. The second part of the talk will focus on an orthogonal approach to privacy using multi-party computation (MPC). We present a detailed characterization of MPC overheads when executing large language models in a distributed manner. We then present MPCpipe, a pipelined MPC execution model that overlaps computation and communication in MPC.
Biography: Murali Annavaram is the Lloyd Hunt Chair Professor in the Ming Hsieh Department of Electrical and Computer Engineering and the Thomas Lord Department of Computer Science (joint appointment) at the University of Southern California. He was the Rukmini Gopalakrishnachar Chair Professor at the Indian Institute of Science. He is the founding director of the REAL@USC-Meta center, which focuses on research and education in AI and learning. His research group tackles a wide range of computer system design challenges relating to energy efficiency, security, and privacy. He has been inducted into the halls of fame of three prestigious computer architecture conferences: ISCA, MICRO, and HPCA. He served as the Technical Program Chair for HPCA 2021 and as the General Co-Chair for ISCA 2018. Prior to his appointment at USC, he worked at Intel Microprocessor Research Labs from 2001 to 2007. His work at Intel led to the first 3D microarchitecture paper and also influenced Intel's TurboBoost technology. In 2007 he was a visiting researcher at the Nokia Research Center, working on mobile phone-based wireless traffic sensing using virtual trip lines, which later became the Nokia Traffic Works product. In 2020 he was a visiting faculty scientist at Facebook, where he designed checkpoint systems for distributed training. Murali co-authored Parallel Computer Organization and Design, a widely used textbook covering both basic and advanced principles of computer architecture. Murali received his Ph.D. in Computer Engineering from the University of Michigan, Ann Arbor, in 2001. He is a Fellow of the IEEE and a Senior Member of the ACM.
Direct Air-Water Communication and Sensing with Light
Date: April 28, 2023, 11am - 12pm EST
Abstract: The ability to communicate and sense across the air-water boundary is essential for efficient exploration and monitoring of the underwater world. Existing wireless solutions for communication and sensing typically focus on a single physical medium and fall short in achieving high-bandwidth communication and accurate sensing across the air-water interface without any relays on the water surface.
In this talk, I will describe our efforts to exploit laser light to enable effective communication and sensing in the air-water context. I will present our AmphiLight framework, which allows bidirectional, Mbps-level communication across the air-water interface. I will focus on our design elements that achieve full-hemisphere laser steering with portable hardware, handle strong ambient light outdoors, and adapt to the dynamics of water waves. I will then introduce our recent work Sunflower, which allows an aerial drone to directly locate underwater robots with centimeter-level accuracy, without any relays on the water surface. I will conclude with future work and open challenges.
Biography: Xia Zhou is an Associate Professor in the Department of Computer Science at Columbia University, where she directs the Mobile X Laboratory. Before joining Columbia in 2022, she was a faculty member in the Department of Computer Science at Dartmouth College. Her research interests lie broadly in mobile computing, with recent focuses on light-based communication and sensing, mobile sensing, and human-computer interaction. She received the Presidential Early Career Award (PECASE) in 2019, the SIGMOBILE RockStar Award in 2019, and the Karen E. Wetterhahn Memorial Award for Distinguished Creative and Scholarly Achievement in 2018, and was named one of N2Women's Rising Stars in Networking and Communications in 2017. She has also won the Sloan Research Fellowship in 2017, the NSF CAREER Award in 2016, and a Google Faculty Research Award in 2014. She received her PhD from UC Santa Barbara in 2013 and her MS from Peking University in 2007.
Practical Federated Learning: From Research to Production
Date: April 20, 2023, 11am - 12pm EST
Abstract: Although cloud computing has successfully accommodated the three "V"s of Big Data, collecting everything into the cloud is becoming increasingly infeasible. Today, we face a new set of challenges. A growing awareness of privacy among individual users and governing bodies is forcing platform providers to restrict the variety of data we can collect. Often, we cannot transfer data to the cloud at the velocity of its generation. Many cloud users suffer from sticker shock, buyer's remorse, or both as they try to keep up with the volume of data they must process. Making sense of data in the device where it is generated is more appealing than ever.
Although theoretical federated learning research has grown exponentially over the past few years, the community has been far from putting those theories into practice. In this talk, I will introduce FedScale, a scalable and extensible open-source federated learning and analytics platform that we built from a systems perspective to make federated learning practical. It provides high-level APIs for implementing algorithms, a modular design for customizing implementations for diverse hardware and software backends, and the ease of deploying the same code at many scales. FedScale also includes a comprehensive benchmark that allows machine learning users to evaluate their ideas in realistic, large-scale settings. I will also highlight how LinkedIn is integrating FedScale into machine learning workflows that affect 800+ million users, and share some lessons we learned when migrating academic research to production.
Biography: Mosharaf Chowdhury is an Associate Professor of CSE at the University of Michigan, Ann Arbor, where he leads the SymbioticLab. His research improves application performance and system efficiency of machine learning and big data workloads with a recent focus on optimizing energy consumption and data privacy. His group developed Infiniswap, the first scalable memory disaggregation solution; Salus, the first software-only GPU sharing system for deep learning; FedScale, a scalable federated learning and analytics platform; and Zeus, the first GPU energy-vs-training performance optimizer for DNN training. In the past, Mosharaf invented coflows and was one of the original creators of Apache Spark. He has received many individual awards, fellowships, and seven paper awards from top venues, including NSDI, OSDI, and ATC. Mosharaf received his Ph.D. from the AMPLab at UC Berkeley in 2015.
FarmVibes: Democratizing Digital Tools for Sustainable Agriculture
Date: April 14, 2023, 2pm - 3pm EST
Abstract: Agriculture is one of the biggest contributors to climate change. Agriculture and land-use degradation, including deforestation, account for about a quarter of global GHG emissions. Agriculture consumes about 70% of the world's freshwater resources. Agriculture is also among the sectors most impacted by climate change: farmers depend on predictable weather for their farm management practices, and unexpected weather events, e.g., extreme heat and floods, leave them unprepared. Agriculture could also be part of the solution to the climate problem. If farmers use the right agricultural practices, farming can help remove carbon from the atmosphere. However, making progress on any of the above challenges is difficult due to the lack of data from farms. Existing approaches for estimating emissions or sequestered carbon are very expensive. Through this project, our goal is to build affordable digital technologies that help farmers (1) estimate the amount of emissions on their farm, (2) adapt to climate change by predicting weather variations, and (3) determine the right management practices that are profitable and also help sequester carbon.
Biography: Ranveer Chandra is the Managing Director for Research for Industry, and the CTO of Agri-Food at Microsoft. He also leads the Networking Research Group at Microsoft Research, Redmond. Previously, Ranveer was the Chief Scientist of Microsoft Azure Global. His research has shipped as part of multiple Microsoft products, including VirtualWiFi in Windows 7 onwards, low power Wi-Fi in Windows 8, Energy Profiler in Visual Studio, Software Defined Batteries in Windows 10, and the Wireless Controller Protocol in XBOX One. His research also led to a new product. Ranveer is active in the networking and systems research community, and has served as the Program Committee Chair of IEEE DySPAN 2012, ACM MobiCom 2013, and ACM HotNets 2022.
Ranveer started Project FarmBeats at Microsoft in 2015. He also led the battery research project and the white space networking project at Microsoft Research. He was invited to the USDA to present his research to the US Secretary of Agriculture; this work was featured by Bill Gates in GatesNotes and was selected by Satya Nadella as one of 10 projects that inspired him in 2017. Ranveer has also been invited to the FCC to present his work on TV white spaces, and spectrum regulators from India, China, Brazil, Singapore, and the US (including the FCC chairman) have visited the Microsoft campus to see his deployment of the world's first urban white space network. As part of his doctoral dissertation, Ranveer developed VirtualWiFi. The software has over a million downloads and was among the top 5 most downloaded software releases from Microsoft Research. It has shipped as a feature of Windows since 2009.
Ranveer has published more than 100 papers and holds over 150 patents granted by the USPTO. His research has been covered by the popular press, including The Economist, MIT Technology Review, the BBC, Scientific American, The New York Times, and The Wall Street Journal. He is a Fellow of the ACM and the IEEE and has won several awards, including best paper awards at ACM CoNEXT 2008, ACM SIGCOMM 2009, IEEE RTSS 2014, USENIX ATC 2015, Runtime Verification 2016 (RV'16), ACM COMPASS 2019, and ACM MobiCom 2019, the Microsoft Research Graduate Fellowship, the Microsoft Gold Star Award, MIT Technology Review's Top Innovators Under 35 (TR35, 2010), and Fellow in Communications, World Technology Network (2012). He was recently recognized by Newsweek magazine as one of America's 50 Most Disruptive Innovators (2021). Ranveer has an undergraduate degree from IIT Kharagpur, India, and a PhD from Cornell University.
Machine Learning and the Data Center: A Dangerous Dead End
Date: March 24, 2023
Abstract: The vast majority of machine learning (ML) today occurs in a data center. But there is a very real possibility that in the (near?) future, we will view this situation similarly to how we now view lead paint, fossil fuels, and asbestos: a technological means to an end that was used for a time because, at that stage, we did not have viable alternatives – and we did not fully appreciate the negative externalities being caused. Awareness of the unwanted side effects of the current data-center-centric ML paradigm is building. It couples to ML an alarming carbon footprint, a reliance on biased closed-world datasets, and serious risks to user privacy – and it promotes centralized control by large organizations because of the extreme compute resources assumed. In this talk, I will offer a sketch of preliminary thoughts on how a data-center-free future for ML might come about, and describe how some of our recent research results and system solutions (including the Flower framework – http://flower.dev) might offer a foundation along this path.
Biography: Nic Lane (http://niclane.org) is a full Professor in the Department of Computer Science and Technology, and a Fellow of St. John's College, at the University of Cambridge. He also leads the Cambridge Machine Learning Systems Lab (CaMLSys – http://mlsys.cst.cam.ac.uk/). Alongside his academic appointments, Nic is the co-founder and Chief Science Officer of Flower Labs (https://flower.dev/), a venture-backed AI company (YC W23) behind the Flower framework. Nic has received multiple best paper awards, including ACM/IEEE IPSN 2017 and two from ACM UbiComp (2012 and 2015). In 2018 and 2019, he (and his co-authors) received the ACM SenSys Test-of-Time Award and the ACM SIGMOBILE Test-of-Time Award for pioneering research, performed during his PhD, that devised machine learning algorithms used today on devices like smartphones. Nic was the 2020 ACM SIGMOBILE Rockstar Award winner for his contributions to "the understanding of how resource-constrained mobile devices can robustly understand, reason and react to complex user behaviors and environments through new paradigms in learning algorithms and system design."
Efficiently Enabling Rich and Trustworthy Inferences at the Extreme Edge
Date: February 8, 2023
Abstract: Computing systems that intelligently perform perception-cognition-action (PCA) loops are essential to interfacing our digitized society with the analog world in which it is embedded. They employ distributed edge-cloud computing hierarchies and deep learning methods to make sophisticated inferences and decisions from high-dimensional unstructured sensory data in our personal, social, and physical spaces. While the adoption of deep learning has resulted in considerable advances in accuracy and richness, it has also introduced challenges such as generalizing to novel situations, assuring robustness in the face of uncertainty, reasoning about complex spatiotemporal events, implementation on ultra-resource-constrained edge devices, and engendering trust in opaque models. This talk presents ideas for addressing these challenges with neuro-symbolic and physics-aware models, automatic platform-aware architecture search, and sharing of edge resources, and describes our experience applying them in varied application domains such as mobile health and agricultural robotics.
Biography: Mani Srivastava is on the faculty at UCLA where he is a Distinguished Professor in the ECE Department with a joint appointment in the CS Department and is the Vice Chair for Computer Engineering. His research is broadly in the area of human-cyber-physical and IoT systems that are learning-enabled, resource-constrained, and trustworthy. It spans problems across the entire spectrum of applications, architectures, algorithms, and technologies in the context of systems and applications for mHealth, sustainable buildings, and smart built environments. He is a Fellow of both the ACM and the IEEE.
Acoustic-Based Active & Passive Sensing and Applications
Date: November 15, 2022
Abstract: Video games, virtual reality (VR), augmented reality (AR), smart appliances (e.g., smart TVs and drones), and online meetings all call for new ways for users to interact with and control them. Motivated by this observation, we have developed a series of novel acoustic sensing technologies that transmit specially designed signals and/or use signals naturally arising from the environment. We have further developed several interesting applications on top of our motion tracking technology.
Biography: Dr. Lili Qiu is Assistant Managing Director of Microsoft Research Asia, where she is mainly responsible for overseeing research, as well as collaboration with industry, universities, and research institutes, at Microsoft Research Asia (Shanghai). Before joining Microsoft Research Asia, Dr. Qiu was a professor of computer science at the University of Texas at Austin. Dr. Qiu is a world-leading expert in the field of Internet and mobile wireless networks. She obtained her MS and PhD degrees in computer science from Cornell University and worked at Microsoft Research Redmond as a researcher in the Systems & Networking Group from 2001 to 2004. Dr. Qiu is an ACM Fellow and an IEEE Fellow and serves as the ACM SIGMOBILE chair. She is also a recipient of the NSF CAREER Award, a Google Faculty Research Award, and best paper awards at ACM MobiSys '18 and IEEE ICNP '17.
Edge Video Services on 5G Infrastructure
Date: November 15, 2022
Abstract: Creating a programmable software infrastructure for telecommunication operations promises to reduce both the capital expenditure and the operational expenses of 5G telecommunications operators. The convergence of telecommunications, cloud, and edge infrastructures will open up opportunities for new innovations and revenue for both the telecommunications industry and the cloud ecosystem. This talk will focus on video, the dominant traffic type on the Internet since the introduction of 4G networks. With 5G, not only will the volume of video traffic increase, but there will also be many new solutions for industries, from retail to manufacturing to healthcare and forest monitoring, infusing deep learning and AI for video analytics scenarios. The talk will touch on various advances in edge video analytics systems, including real-time inference over edge hierarchies, continuous learning of models, and privacy-preserving video analytics.
Biography: Ganesh Ananthanarayanan is a Principal Researcher at Microsoft. His research interests are broadly in systems & networking, with recent focus on live video analytics, cloud computing & large scale data analytics systems, and Internet performance. He has published over 30 papers in systems & networking conferences such as USENIX OSDI, ACM SIGCOMM and USENIX NSDI, which have been recognized with the Best Paper Award at ACM Symposium on Edge Computing (SEC) 2020, CSAW 2020 Applied Research Competition Award (runner-up), ACM MobiSys 2019 Best Demo Award (runner-up), and highest-rated paper at ACM Symposium on Edge Computing (SEC) 2018. His work on “Video Analytics for Vision Zero” on analyzing traffic camera feeds won the Institute of Transportation Engineers 2017 Achievement Award as well as the “Safer Cities, Safer People” US Department of Transportation Award. He has collaborated with and shipped technology to Microsoft’s cloud and online products like the Azure Cloud, Cosmos (Microsoft’s big data system), Azure Live Video Analytics, and Skype. He was a founding member of the ACM Future of Computing Academy. Prior to joining Microsoft Research, he completed his Ph.D. at UC Berkeley in Dec 2013, where he was also a recipient of the UC Berkeley Regents Fellowship, and prior to his Ph.D., he was a Research Fellow at Microsoft Research India.
Smart Surfaces for NextG and Satellite mmWave and Ku-Band Wireless Networks
Date: November 1, 2022
Abstract: To support faster and more efficient networks, mobile operators and service providers are bringing 5G millimeter wave (mmWave) networks indoors. However, due to their high directionality, mmWave links are extremely vulnerable to blockage by walls and obstacles. Meanwhile, the first low-Earth-orbit satellite networks for Internet service have recently been deployed and are growing in size, yet they will face deployment challenges in many practical circumstances of interest. To address both of these challenges, the Princeton Advanced Wireless Systems lab is exploiting advances in artificially engineered metamaterials to enable steerable wireless mmWave and Ku-band beam reflection and refraction. Our approaches fall under the category of Huygens metamaterials, but our advances enable, for the first time, practical electronic control of such materials and their simultaneous use in multiple frequency bands. We have specified our designs in RF simulators and prototyped them in hardware, and our experimental evaluation demonstrates up to 20 dB SNR gains over environmental paths in an indoor office environment.
Biography: Kyle Jamieson is Professor of Computer Science and Associated Faculty in Electrical and Computer Engineering at Princeton University. His research focuses on mobile and wireless systems for sensing, localization, and communication, and on massively parallel classical, quantum, and quantum-inspired computational structures for NextG wireless communications systems. He received the B.S. (Mathematics, Computer Science), M.Eng. (Computer Science and Engineering), and Ph.D. (Computer Science, 2008) degrees from the Massachusetts Institute of Technology. He has since received a Starting Investigator fellowship from the European Research Council, a Google Faculty Research Award, and the ACM SIGMOBILE Early Career Award. He served as an Associate Editor of the IEEE/ACM Transactions on Networking from 2018 to 2020. He is a Senior Member of the ACM and the IEEE.
Creating the Internet of Biological and Bio-inspired Things
Date: October 4, 2022
Abstract: Living organisms can perform incredible feats. Plants like dandelions can disperse their seeds over a kilometer in the wind, and small insects like bumblebees can see, smell, communicate, and fly around the world, despite their tiny size. Enabling some of these capabilities for the Internet of things (IoT) and cyber-physical systems would be transformative for applications ranging from large-scale sensor deployments to micro-drones, biological tracking, and robotic implants. In this talk, I will explain how by taking an interdisciplinary approach spanning wireless communication, sensing, and biology, we can create programmable devices for the internet of biological and bio-inspired things. I will present the first battery-free wireless sensors, inspired by dandelion seeds, that can be dispersed by the wind to automate deployment of large-scale sensor networks. I will then discuss how integrating programmable wireless sensors with live animals like bumblebees can enable mobility for IoT devices, and how this technique has been used for real-world applications like tracking invasive “murder” hornets. Finally, I will present an energy-efficient insect-scale steerable vision system inspired by animal head motion that can ride on the back of a live beetle and enable tiny terrestrial robots to see.
Biography: Shyam Gollakota is a Washington Research Foundation Endowed Professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. His work has been licensed and acquired by multiple companies and is in use by millions of users. His lab also worked closely with the Washington Department of Agriculture to wirelessly track the invasive "murder" hornets, which resulted in the destruction of the first nest in the United States. He received the ACM Grace Murray Hopper Award in 2020 and was named a Moore Inventor Fellow in 2021. He was also named to MIT Technology Review's 35 Innovators Under 35, Popular Science's "Brilliant 10," and twice to Forbes' 30 Under 30 list. His group's research has earned best paper awards at MobiCom, SIGCOMM, UbiComp, SenSys, NSDI, and CHI; appeared in interdisciplinary journals such as Nature, Nature Communications, Nature Biomedical Engineering, Science Translational Medicine, Science Robotics, and Nature Digital Medicine; and was named an MIT Technology Review Breakthrough Technology of 2016 and one of Popular Science's top innovations of 2015. He is an alumnus of MIT (Ph.D., 2013, winner of the ACM Doctoral Dissertation Award) and IIT Madras.
Decoding Hidden Worlds: Wireless & Sensor Technologies for Oceans, Health, and Robotics
Date: September 13, 2022
Abstract: As humans, we crave to explore hidden worlds. Yet, today’s technologies remain far from allowing us to perceive most of the world we live in. Despite centuries of seaborne voyaging, more than 95% of our ocean has never been observed or explored. And, at any moment in time, each of us has very little insight into the biological world inside our own bodies. The challenge in perceiving hidden worlds extends beyond ourselves: even the robots we build are limited in their visual perception of the world. In this talk, I will describe new technologies that allow us to decode areas of the physical world that have so far been too remote or difficult to perceive. First, I will describe a new generation of underwater sensor networks that can sense, compute, and communicate without requiring any batteries; our devices enable real-time and ultra-long-term monitoring of ocean conditions (temperature, pressure, coral reefs) with important applications to scientific exploration, climate monitoring, and aquaculture (seafood production). Next, I will talk about new wireless technologies for sensing the human body, both from inside the body (via batteryless micro-implants) as well as from a distance (for contactless cardiovascular and stress monitoring), paving the way for novel diagnostic and treatment methods. Finally, I will highlight our work on extending robotic perception beyond line-of-sight, and how we designed new RF-visual primitives for robotics - including sensing, servoing, navigation, and grasping - to enable new manipulation tasks that were not possible before. The talk will cover how we have designed and built these technologies, and how we work with medical doctors, climatologists, oceanographers, and industry practitioners to deploy them in the real world. 
I will also highlight the open problems and opportunities for these technologies, and how researchers and engineers can build on our open-source tools to help drive them to their full potential in addressing global challenges in climate, health, and automation.
Biography: Fadel Adib is an Associate Professor in the MIT Media Lab and the Department of Electrical Engineering and Computer Science. He is the founding director of the Signal Kinetics group, which invents wireless and sensor technologies for networking, health monitoring, robotics, and ocean IoT. He is also the founder & CEO of Cartesian Systems, a spinoff from his lab that focuses on mapping indoor environments using wireless signals. Adib was named by MIT Technology Review as one of the world's top 35 innovators under 35 and by Forbes as one of its 30 under 30. His research on wireless sensing (X-Ray Vision) was recognized as one of the 50 ways MIT has transformed Computer Science, and his work on robotic perception (Finder of Lost Things) was named one of the 103 Ways MIT is Making a Better World. Adib's commercialized technologies have been used to monitor thousands of patients with Alzheimer's, Parkinson's, and COVID-19, and he has had the honor of demoing his work to President Obama at the White House. Adib is also the recipient of various awards, including the NSF CAREER Award (2019), the ONR Young Investigator Award (2019), the ONR Early Career Grant (2020), the Google Faculty Research Award (2017), the Sloan Research Fellowship (2021), and the ACM SIGMOBILE Rockstar Award (2022), and his research has received Best Paper/Demo Awards at SIGCOMM, MobiCom, and CHI. Adib received his Bachelor's degree from the American University of Beirut (2011) and his PhD from MIT (2016), where his thesis won the Sprowls Award for Best Doctoral Dissertation at MIT and the ACM SIGMOBILE Doctoral Dissertation Award.