(I) AI Security & Privacy
(I) AI Security & Privacy
(1) Privacy-preserving Machine Learning
Due to the need for large training datasets to enhance the effectiveness of machine learning systems, these training datasets, which may include biological data, government data, financial data, medical data, and more, often contain personal privacy or sensitive information. Protecting the privacy of these datasets (the de-identification process is akin to pixelating sensitive data) without compromising the accuracy and processing time of machine learning is a significant research challenge. For instance, during the fight against the COVID-19 pandemic, balancing privacy and the analyzability of tracking data is crucial. This study explores the theoretical foundations behind such privacy and cybersecurity issues and introduces corresponding security mechanisms. Currently, the best privacy protection technology is Differential Privacy, which is already applied in Apple's iPhones, Google's Chrome browser, and Microsoft's Windows crash reports. Both academia and industry have invested considerable resources in researching this field.
(2) Security issues of LLM
Currently, applications of large language models (LLMs) such as ChatGPT have become popular tools. However, they also bring numerous cybersecurity issues. These include privacy concerns arising from the vast amount of training data, the risk of the models themselves being maliciously attacked (e.g., prompt injection attacks), the potential for these models to be used in rewriting and generating new types of malware (ChatGPT-related malware variants), and other previously unforeseen cybersecurity problems. With the rapid proliferation of tools like ChatGPT, these issues can lead to serious consequences and impacts. Therefore, our research focuses on addressing the new types of cybersecurity challenges posed by the rise of large language models.
(*picture source: OpenAI)
(3) Machine Learning with Adversarial Example
Recent research has indicated that deep learning models can produce unstable results and may be exploited maliciously, raising security concerns. This has become a prominent topic in machine learning research. An "adversarial example" refers to deliberately crafted input data designed to cause a machine learning model to make incorrect predictions. This phenomenon was first identified by Christian Szegedy et al. in 2013. They found that image recognition models trained on datasets like ImageNet, MNIST, and AlexNet could be significantly affected by minor changes to the input. For instance, an image that a model initially identifies correctly can be misclassified by altering just a few pixels. These changes are so slight that they are almost imperceptible to the human eye, yet they cause the model to produce drastically different results.
(*picture source: Google AI Blog)
(II) Intrusion Detection
(1) IoT and Mobile Device Security
With the growing importance of the Internet of Things (IoT), it is becoming a new target for hacker attacks due to its extensive use of sensors and data collection capabilities. Unlike mobile devices, IoT devices are often placed in easily accessible locations, making them more vulnerable to attacks. Therefore, addressing IoT security issues is a crucial future challenge. Moreover, with the popularity of smartphones, mobile devices contain more critical personal information compared to traditional personal computers, such as photos, location data, and phone numbers. This makes them more attractive targets for hackers or spyware. Protecting the information security of smartphones has become a primary focus for industry professionals and researchers. Additionally, in terms of privacy security, mobile devices are closely tied to personal lives. Many apps aim to collect extensive user data to offer more convenient and intelligent services. However, this often involves uploading significant amounts of personal information (e.g., location data) to the service providers' servers, which can reduce users' willingness to use these apps or services. Consequently, developing solutions that maintain personal data privacy while providing smart services has become a key research area for major software companies.
(2) Malware/Threat Intrusion Detection
Due to the presence of numerous zero-day and rapidly evolving malware in the current network environment, defenders often struggle to collect a large number of malicious samples promptly. To address this issue, many recent defense studies have started using few-shot learning methods. Current defense mechanisms include static malware analysis and dynamic malware analysis. The key difference between the two is that static analysis is faster, whereas dynamic analysis is better at resisting code obfuscation and polymorphic malware, which can change their code with each replication. Our research focuses on proposing the most suitable solutions for different application environments.
(III) Multimedia Security
Multimedia Security
With the emergence of various new multimedia environments, such as VR/AR devices, there often aren't adequate privacy and security measures in place. This includes the protection of VR/AR users' private data, such as GPS location, eye movements, facial expressions, and other physiological data. Therefore, we are conducting related cybersecurity research in this area. In addition, we are also investigating potential new types of attack threats. For example, attackers might combine users' interactions in both the physical and virtual worlds to conduct more in-depth analysis and predictive attacks.
(IV) Applied Cryptography
Applied Cryptography and Digital Identity (Certificate) Management Technology
In today's digital world, identity verification and management technologies are built upon cryptography and related digital signatures and certificate technologies. Examples include government-issued personal certificates/corporate certificates, IoT certificates for IoT applications, vehicular network certificates for smart autonomous vehicles, server certificates for servers, and communication certificates for encrypted communication in mobile apps. These are all used as the basis for various forms of digital identity authentication. However, it is evident that if cryptographic technologies are compromised, hackers could forge personal identities and engage in illegal activities in the digital realm. Therefore, this field of research is often considered the most crucial infrastructure in the cyber world. It is also one of the areas where governments and commercial companies invest the most time and money.
(*picture source: NDC)
(V) Quantum Security
Post-Quantum Cryptography
In recent years, cyberattacks have become increasingly frequent, making cyberattack and defense issues more critical and even a measure of national military strength. The cultivation of cybersecurity and related talent has become a key focus. Consequently, governments worldwide, including Taiwan (facing threats from mainland China's cyber army), have invested substantial research funding in cybersecurity issues. The advent of quantum computers poses a significant threat to high-security cryptographic systems currently used by governments and military units, such as Public Key Infrastructure (PKI). As a result, national entities are providing substantial research funding for related topics to enhance the information security levels of government and military operations.
(*picture source: NIST)