Keynotes

Edge AI for Wildlife Conservation

Abstract:

Biodiversity is declining globally at unprecedented rates. We need to monitor species in real time and in greater detail to quickly understand which conservation efforts are most effective and take corrective action. Current ecological monitoring systems generate data far faster than researchers can analyze it, making scaling up impossible without automated data processing. However, ecological data collected in the field presents a number of challenges that current methods, like deep learning, are not designed to tackle. Biodiversity data is correlated in time and space, resulting in overfitting and poor generalization to new sensor deployments. Environmental monitoring sensors have limited intelligence, so objects of interest are often too close or too far away, blurry, or lost in clutter. Further, the distribution of species is long-tailed, which results in highly imbalanced datasets. Finally, bandwidth bottlenecks from the field to the cloud significantly slow down processing times. Human-AI systems that incorporate edge-based processing are needed for time-sensitive challenges like mitigating human-wildlife conflict, reducing poaching, and monitoring escapement to ensure sustainable fishing.
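The long-tailed species distribution mentioned above is commonly countered by re-weighting the loss toward rare classes. As a minimal illustration (my own sketch, not code from the talk; the class names and counts are made up), inverse-frequency weights can be computed like this:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class loss weights proportional to 1 / class frequency,
    normalised so the weights average to 1 across classes."""
    counts = Counter(labels)
    raw = {c: 1.0 / n for c, n in counts.items()}
    mean = sum(raw.values()) / len(raw)
    return {c: w / mean for c, w in raw.items()}

# A long-tailed toy dataset: 90 images of a common species, 10 of a rare one.
labels = ["common"] * 90 + ["rare"] * 10
weights = inverse_frequency_weights(labels)
# The rare class is up-weighted 9x relative to the common one.
print(weights["rare"] / weights["common"])  # → 9.0
```

In practice these weights would multiply the per-example loss during training, so mistakes on rare species cost proportionally more.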


Speaker: Sara Beery, Caltech

Sara Beery is a senior PhD candidate in Computing and Mathematical Sciences at Caltech. Her research focuses on computer vision for conservation and sustainability challenges. Sara was awarded both the PIMCO Data Science Fellowship and the Amazon AI4Science Fellowship, which recognise her remarkable impact on machine learning, data science and their applications. She has been actively engaged with the research community as an invited speaker and panellist at prestigious venues including CVPR, NeurIPS, ICML, ICCV, and WACV.

National Strategies and Investment Trends in Edge AI

Abstract:

According to Gartner, Edge AI is at the peak of “inflated expectations” on the Hype Cycle but is likely to pass through the trough of disillusionment and reach the plateau of productivity in the next 2 to 5 years. It is hard to define the strategies and investment trends in Edge AI, as it can have such a broad definition and impact across all sectors. A reasonable proxy is edge computing, whose products, services and solutions have an estimated 2022 market size of $1.5 billion in the Asia-Pacific alone, with a CAGR of 40% (TechTarget, 2019). Edge computing is becoming mainstream: while only 10% of companies currently process data at the edge, GlobalData predicts that by 2025, 75% of companies will rely on edge computing.

In Australia we have a track record of producing novel edge AI solutions to overcome the barriers created by remoteness, where processing sensor data at the edge offers significant advantages in the face of poor communications. Several roadmaps point to this trend, including the Robotics Roadmap for Australia (2018 & 2022) and the Australian Space Agency’s Robotics and Automation on Earth and in Space Roadmap (2021-2030); Defence Science, through its Next Generation Technologies Fund, has also identified several technologies that would rely on Edge AI for successful development and deployment.

General investment trends in Edge AI in Australia (and New Zealand) are difficult to unpick: what constitutes an edge AI company? Indeed, what constitutes an AI company at all? Using broad definitions, investment from publicly reportable sources since 2010 in AI companies in ANZ appears to be on the order of $2.5 billion, with at least 20% of that amount invested in IoT application companies, a good proxy for Edge AI. Some of the most successful companies (in capital raising) include Nuheara, Global Kinetics, The Yield and Catapult Sports, with the standout performer being Canberra-based Seeing Machines, a provider of AI-enabled driver monitoring solutions.


Speaker: Dr. Sue Keay, Robotics Australia Group

Dr Sue Keay, an experienced R&D leader with a focus on disruptive technologies, is one of Australia’s most influential people in artificial intelligence and robotics. She is the founder and chair of Robotics Australia Group, which represents the robotics industry in Australia, a Fellow of the Australian Academy of Technology and Engineering (ATSE), and an adjunct professor at QUT. Her expertise in national leadership and in directing R&D programs for impact is evident from her recent leadership roles in the Queensland AI Hub, Cyber-Physical Systems at CSIRO’s Data61, and the Australian Centre for Robotic Vision, among others.

Efficient Neural Architecture Search

Abstract:

In this talk, I will introduce our recent work on neural architecture search (NAS). To begin with, I will briefly introduce the challenges of NAS and summarize existing approaches and their corresponding limitations. After that, I will present our approach to modularizing the large search space of NAS into blocks, which ensures that potential candidate architectures are fully trained; this reduces the representation shift caused by shared parameters and leads to correct rating of the candidates. Then, we will talk about the dynamic slimmable network, which achieves good hardware efficiency by dynamically adjusting the number of filters at test time with respect to different inputs, while keeping filters stored statically and contiguously in hardware to avoid extra overhead. Finally, I will introduce an unsupervised NAS method that addresses the problem of inaccurate architecture rating caused by the large weight-sharing space and biased supervision in previous methods.
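The key trick behind the dynamic slimmable network is that, because filters are stored contiguously, "slimming" at test time is just a slice over the leading filters. A toy sketch of that idea (my own illustration with made-up names and shapes, not the paper's code), using a plain dense layer:

```python
def slim_forward(x, weight, bias, keep):
    """Forward through a dense layer using only the first `keep` output
    units; since weights are stored contiguously, slimming is a slice."""
    w = weight[:keep]   # first `keep` rows, i.e. the retained filters
    b = bias[:keep]
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

# A 4-unit layer over a 3-d input; in the real method a small gating
# module would pick the width per input (easy input -> narrow path).
weight = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]]
bias = [0, 0, 0, 0]
x = [2.0, 3.0, 4.0]
easy = slim_forward(x, weight, bias, keep=2)   # narrow path: 2 filters
hard = slim_forward(x, weight, bias, keep=4)   # full path: 4 filters
print(easy)  # → [2.0, 3.0]
print(hard)  # → [2.0, 3.0, 4.0, 9.0]
```

The same slicing applies per convolutional layer in the actual network, so no weights are duplicated or re-packed for the narrow paths.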


Speaker: Prof. Xiaojun Chang, University of Technology Sydney

Dr Xiaojun Chang is a Professor at the Australian Artificial Intelligence Institute, Faculty of Engineering and Information Technology, University of Technology Sydney. He is also an Honorary Professor in the School of Computing Technologies, RMIT University, Australia. His research focuses on exploring multiple signals (visual, acoustic, textual) for automatic content analysis in unconstrained or surveillance videos. Xiaojun’s team, which aims to advance visual understanding using deep learning, has won multiple prizes at international grand challenges against competitive teams from MIT, the University of Maryland, Facebook AI Research (FAIR) and Baidu VIS.

Intelligent Sensing Devices in Real World Applications

Abstract:

Intelligent sensing devices combine sensors, intelligent algorithms, and communications into powerful tools for real-time exploration of the physical world. Machine learning algorithms that run directly on sensing devices have revolutionized sensing applications over the last few years. Embedded AI/ML methods can reduce the complex data collected by multimedia or environmental sensors into dense, actionable information that can be communicated from remote locations in an energy-efficient manner. However, the landscape of sensing devices has broadened significantly over the last decade, ranging from tiny battery-less sensors operating on microwatt energy budgets to powerful machine learning hardware accelerators capable of analyzing high-definition video streams. This talk will categorize the sensing landscape that exists today and provide examples of machine learning algorithms and their applications for each class of intelligent sensing devices.


Speaker: Dr. Brano Kusy, CSIRO DATA61

Dr. Brano Kusy is a principal research scientist and group leader of the Distributed Sensing Systems group at Data61, CSIRO. His research explores new frontiers in networked embedded systems, mobile and wearable computing, and the Internet of Things. His work has focused on the scalability and energy efficiency of resource-constrained distributed systems, and on algorithms for coordinated control, spatio-temporal synchronization, reliable wireless communications, on-device machine learning, and adaptive sampling. Brano is an internationally respected scientist: he served as General Chair of IPSN’20, is regularly involved in the programme committees of top-ranked international conferences (ACM SenSys, ACM/IEEE IPSN, EWSN, IEEE MASS, IEEE ICDCS), and has served on the TinyOS Core Working Group.

Knowledge Distillation and Model Quantisation

Abstract:

We consider transferring the structure information from large networks to compact ones for dense prediction tasks in computer vision. Previous knowledge distillation strategies used for dense prediction tasks often directly borrow the distillation scheme for image classification and perform knowledge distillation for each pixel separately, leading to sub-optimal performance. Here we propose to distill structured knowledge from large networks to compact networks, taking into account the fact that dense prediction is a structured prediction problem. Specifically, we study two structured distillation schemes: i) pair-wise distillation that distills the pair-wise similarities by building a static graph; and ii) holistic distillation that uses adversarial training to distill holistic knowledge. The effectiveness of our knowledge distillation approaches is demonstrated by experiments on three dense prediction tasks: semantic segmentation, depth estimation and object detection.
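As a rough sketch of the pair-wise scheme (my own minimal illustration, not the authors' code), the loss below matches the student's pairwise cosine-similarity matrix to the teacher's, so only the relational structure among (e.g. pixel) features is transferred, not the raw per-pixel values:

```python
import math

def pairwise_similarity(features):
    """Cosine similarity between every pair of feature vectors."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)
    return [[cos(a, b) for b in features] for a in features]

def pairwise_distillation_loss(teacher_feats, student_feats):
    """Mean squared difference between teacher and student pairwise
    similarity matrices: the structured (relational) distillation signal."""
    t = pairwise_similarity(teacher_feats)
    s = pairwise_similarity(student_feats)
    n = len(t)
    return sum((t[i][j] - s[i][j]) ** 2
               for i in range(n) for j in range(n)) / (n * n)

# A student that preserves the pairwise structure (here a scaled copy of
# the teacher's features) incurs essentially zero loss, since cosine
# similarity ignores positive scaling.
teacher = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
student = [[2.0, 0.0], [0.0, 2.0], [2.0, 2.0]]
loss = pairwise_distillation_loss(teacher, student)
```

In the paper's setting the pairs come from nodes of a static graph over feature-map locations; the toy lists above just stand in for those node features.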


Speaker: Prof. Chunhua Shen, Zhejiang University

Professor Chunhua Shen is currently a professor of computer science at Zhejiang University. His research interests are computer vision and pattern recognition. He was awarded an ARC Future Fellowship in 2012 and was listed as a top researcher in The Australian’s Lifetime Achievers Leaderboard. He has received 35,000+ citations for papers in prestigious conferences and journals including CVPR, ICCV, ECCV, ICML, NeurIPS, TPAMI, IJCV, JMLR, and TOG.

Imaging and Computer Vision – MLAI FSP

Abstract:

Imaging and Computer Vision – MLAI FSP


Speaker: Dr. Lars Petersson, CSIRO

Dr Lars Petersson is a group leader and senior principal research scientist in the Imaging and Computer Vision Group, Data61, CSIRO, Australia. He leads a group of 30+ researchers and engineers developing computer vision technologies for real-world commercial applications while also pushing the boundaries of state-of-the-art research through publications in top-tier venues.

AI for Koala Road Crossing Behaviour Monitoring

Abstract:

From 1997 to 2018, an average of 356 koalas entered care facilities each year due to vehicle collisions. Mitigating koala fatalities and injuries caused by vehicles is one of the most important tasks for koala conservation. This requires a deeper understanding and a better prediction of koala road crossing behaviour, which relies on koala monitoring and tracking technology. This talk introduces a pilot study to expand the pool of koala monitoring technology, applying AI-powered observation networks to investigate koala road crossing behaviours. A network of interconnected devices has been deployed in the vicinity of road crossing structures, with each device integrating a camera, a motion sensor, a wireless/mobile network module, and a solar panel. An animal movement triggers image capture, with images transferred to a server at Griffith University. Computer vision and machine learning systems are then used to process the images, allowing for automatic detection and recognition of individual koalas and other animals. These data will be further analysed to provide a greater understanding of koalas’ use of crossing structures and help to design and optimise the location of fauna mitigation measures on roads.


Speaker: A/Prof. Jun Zhou, Griffith University

Dr Jun Zhou is an associate professor in the School of Information and Communication Technology at Griffith University. His research focuses on hyperspectral imaging, computer vision, pattern recognition and their applications to remote sensing, agriculture, the environment, and medicine. Dr Zhou is also the deputy director of the ARC Hub for Driving Farming Productivity and Disease Prevention. He was a recipient of the Australian Research Council Discovery Early Career Researcher Award in 2012.

Edge AI @ Google - TensorFlow Lite

Abstract:

TensorFlow Lite is a framework to deploy machine learning models on edge devices. It's already used in billions of devices all around the world. In this session, you'll learn about some Edge AI use cases that can be implemented with TensorFlow Lite, and how to use them in your applications. You'll also learn about advanced model optimization techniques, such as quantization and weight pruning, that are used to shrink machine learning models to fit into edge devices.
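At the core of the quantization mentioned above is an affine mapping from floats to 8-bit integers, real ≈ scale × (q − zero_point), which is the representation TensorFlow Lite uses for int8 models. A library-free sketch of that arithmetic (the value range below is made up for illustration):

```python
def quantize_params(xmin, xmax, qmin=-128, qmax=127):
    """Affine (asymmetric) int8 quantization parameters, in the style of
    TensorFlow Lite: real = scale * (q - zero_point)."""
    xmin, xmax = min(xmin, 0.0), max(xmax, 0.0)  # range must include 0
    scale = (xmax - xmin) / (qmax - qmin)
    zero_point = round(qmin - xmin / scale)
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))  # clamp to the int8 range

def dequantize(q, scale, zero_point):
    return scale * (q - zero_point)

scale, zp = quantize_params(0.0, 6.0)   # e.g. a ReLU6 activation range
q = quantize(3.0, scale, zp)
err = abs(dequantize(q, scale, zp) - 3.0)
print(err <= scale)  # round-trip error is bounded by one step → True
```

Post-training quantization tools compute these scale/zero-point pairs from calibration data per tensor (or per channel), shrinking the model roughly 4x relative to float32 weights.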


Speaker: Khanh LeViet, TensorFlow Developer Advocate, Google Inc.

Khanh LeViet is a TensorFlow Developer Advocate at Google, helping developers to create amazing applications with Edge AI. He speaks at technology conferences, writes and publishes sample code on GitHub. Before his journey with AI, Khanh was a mobile developer working on both Android and iOS.