Explainable AI and Content Security in Healthcare Applications and Beyond
by Dr. Zulfiqar Ali
18 April 2024 10.00 a.m. - 12.00 p.m.
at Dongyang Meeting Room 3, Faculty of Engineering, Prince of Songkla University, Thailand
ABSTRACT
Explainable AI (XAI) refers to the development of AI systems whose actions and decisions can be easily understood by humans. In the context of health applications and beyond, explainability is crucial for ensuring transparency, accountability, and trust in AI systems, especially when they are involved in making decisions that can impact human lives.
In health applications, such as medical diagnosis or treatment recommendation systems, explainable AI can help healthcare professionals understand why a certain diagnosis or treatment suggestion was made by the AI system. This understanding is essential for doctors to trust and rely on AI-driven insights and recommendations in their decision-making process. It also allows them to verify the validity of AI-generated conclusions and potentially identify errors or biases in the system.
Content security is another critical aspect, particularly in health applications, where sensitive patient data must be protected against tampering and unauthorized use.
In this talk, both explainable AI and content security will be discussed in the context of vocal fold disorder assessment (https://doi.org/10.1109/ACCESS.2017.2680467), zero-watermarking for medical signals, both audio and image (https://doi.org/10.1016/j.future.2019.01.050, https://doi.org/10.3390/electronics11050710), and imposter detection in forged audio (https://doi.org/10.1016/j.compeleceng.2021.107122).