(Updated on Nov 13, 2022: ICONIP will go fully as a virtual conference)
Description
The wide adoption of machine learning (ML) and artificial intelligence (AI) has made applications successful across society, such as healthcare, finance, robotics, transportation, and industrial operations, by delivering intelligence in real time [1-2]. Designing, developing, and deploying reliable, robust, and secure ML algorithms is essential for building trustworthy systems that offer trusted services to users in high-stakes decision making [2-4]. For instance, AI-assisted robotic surgery, automated financial trading, autonomous driving, and many other modern applications are vulnerable to concept drift, dataset shift, model misspecification, misconfigured model parameters, perturbations, and adversarial attacks beyond human or even machine comprehension, thereby posing serious threats to stakeholders at different levels. Moreover, building trustworthy AI systems requires substantial research into the mechanisms and approaches that can enhance user and public trust. Topics of interest in trustworthy and secure AI include, but are not limited to: (i) bias and fairness, (ii) explainability, (iii) robust mitigation of adversarial attacks, (iv) improved privacy and security in model building, (v) decency, (vi) model attribution, and (vii) scalability of models under adversarial settings [1-5]. All of these topics are important and need to be addressed.
This special session aims to bring together state-of-the-art advances in machine learning (ML) that address the challenges of ensuring reliability, security, and privacy in trustworthy systems. The relevant learning paradigms include, but are not limited to, (i) robust learning, (ii) adversarial learning, (iii) stochastic, deterministic, and non-deterministic learning, and (iv) secure and private learning. More broadly, all aspects of learning algorithms that address reliability, robustness, and security issues are within the scope of the special session. It will focus on robustness and performance guarantees, as well as the consistency, transparency, and safety of AI, which are vital to ensuring reliability. The special session will bring together analytics experts from academia and industry to build trustworthy AI systems by developing and assessing theoretical and empirical methods, practical applications, and new ideas, and by identifying directions for future studies. Original contributions, as well as comparative studies among different methods with an unbiased literature review, are welcome.
Topics of interest include, but are not limited to:
Robustness of machine learning/deep learning/reinforcement learning algorithms and trustworthy systems in general.
Confidence, consistency, and uncertainty in model predictions for reliability beyond robustness.
Transparent AI concepts in data collection, model development, deployment and explainability.
Adversarial attacks - evasion, poisoning, extraction, inference, and hybrid.
New solutions that make a system robust and secure against novel or potentially adversarial inputs, and that handle model misspecification, corrupted training data, concept drift, dataset shift, and missing/manipulated data instances.
Theoretical and empirical analysis of reliable/robust/secure ML methods.
Comparative studies against competing methods that lack certified reliability/robustness properties.
Applications of reliable/robust machine learning algorithms in domains such as healthcare, biomedical, finance, computer vision, natural language processing, big data, and all other relevant areas.
Unique societal and legal challenges facing reliability for trustworthy AI systems.
Secure learning from data with high rates of missing values, incompleteness, or noise.
Private learning from sensitive and protected data
Important Dates
Submission of papers: 11:59pm (AoE), July 07, 2022 (extended)
Acceptance notification: 11:59pm (AoE), August 15, 2022
Camera ready: 11:59pm (AoE), August 31, 2022
Conference dates: November 22-26, 2022
Submission guideline: please follow the guideline here.
Latex Template: Overleaf
Submission page: https://easychair.org/conferences/?conf=iconip2022
Keynote
Title: Towards Reliable and Robust Deep Learning with Neurosymbolic Computing
Biography: Dr. Son Tran is a lecturer in computing at the University of Tasmania, Australia. He obtained his Ph.D. in Computer Science at City, University of London, United Kingdom, in 2016. His research focuses both on theoretical artificial intelligence, i.e., bridging the gap between connectionism and symbolism, and on applications of (deep) neural networks to various tasks in computer vision and natural language processing. He has published in flagship AI conferences and journals such as IJCAI, KR, IJCNN, ECIR, SIGIR, IEEE TNNLS, and ACM TMM. Dr. Tran is also a regular reviewer/PC member for IJCAI, AAAI, ACL, and IEEE TNNLS.
Abstract: Deep learning has shown great success in a wide range of applications, thanks to its ability to learn from big data at large scale. To enable broader adoption of machine learning in general and deep learning in particular, increasing attention has been paid to improving the reliability and robustness of deep learning. One promising approach is neurosymbolic computing, in which symbolic knowledge is integrated with deep neural networks in an efficient and effective way for transparency and generalisation. The advantage of the neurosymbolic approach is that, while the power of learning from data is bolstered with additional knowledge, a model can make decisions with logical reasoning and context awareness. In this talk, I will introduce the principles of neurosymbolic computing and discuss its path toward reliable and robust machine learning.
ICONIP 2022 will be held as a fully virtual conference during November 22-26, 2022.