Project

We have been involved in numerous data mining and machine learning research and industrial projects. We are especially interested in working with companies or organizations that have large-scale data and need data mining or machine learning solutions. Please contact me to determine whether such technology would benefit your organization. The following are examples of recent projects.


[Generative AI, trustworthy LLMs]

[Recommender systems]

[Time series, manufacturing data, anomaly detection]

Below is a list of representative national research projects.

Development of a Multi-lingual Artificial Intelligence News Agent (Basic Research Lab, BRL)

2023.06.01 - 2026.02.28 (33 months). This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00217286).

In this project, we aim to develop a novel multi-lingual artificial intelligence news agent, Milaina, which provides reliable news with various perspectives on a single issue, tailored to the needs of each individual. Compared to existing news platforms, Milaina has the following strengths:

1) Multi-lingual news analysis: It collects, translates, and analyzes multi-lingual news about a single issue. It improves translation quality by refining the tone and automatically generates summaries. By considering news from multiple countries on an issue at the same time, users can obtain more objective fact-checking.

2) Reliable News: By utilizing multi-lingual news analysis and relationship networks, it analyzes causality to measure news reliability, detect fake news, and generate reliable personalized workbooks. Furthermore, we will develop more sophisticated fake news detection technology by utilizing additional information such as the relationship, stance, and nationality of multinational news sources.

3) News analysis and personalized news curation: We analyze the issues and opinions covered by the news and use the analysis results for user modeling to provide personalized news recommendations. By using a quantitative understanding of each news item and a criterion function designed based on multinational news, it is possible to curate news that contains diverse opinions while considering user preferences (a minimal curation sketch follows this list).
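As a hedged illustration of item 3), the sketch below reranks candidate news articles by balancing user preference against diversity of opinion, in the spirit of maximal marginal relevance (MMR). The preference scores, embeddings, and the lambda trade-off are hypothetical placeholders, not Milaina's actual criterion function.

```python
# Hypothetical sketch: diversity-aware news curation via maximal marginal
# relevance (MMR). Not Milaina's actual criterion function; the scores and
# embeddings here are made-up placeholders.
import numpy as np

def mmr_curate(pref_scores, article_vecs, k=5, lam=0.7):
    """Select k articles balancing user preference and opinion diversity.

    pref_scores : (n,) estimated user preference per article
    article_vecs: (n, d) embeddings of article opinions/issues
    lam         : trade-off between preference (1.0) and diversity (0.0)
    """
    vecs = article_vecs / np.linalg.norm(article_vecs, axis=1, keepdims=True)
    selected, candidates = [], list(range(len(pref_scores)))
    while candidates and len(selected) < k:
        def mmr_score(i):
            redundancy = max((vecs[i] @ vecs[j] for j in selected), default=0.0)
            return lam * pref_scores[i] - (1 - lam) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy usage with random placeholder data.
rng = np.random.default_rng(0)
print(mmr_curate(rng.random(20), rng.normal(size=(20, 16)), k=5))
```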

Finally, we will release a multi-lingual news agent system demo and an API that reflect these technologies, making the underlying technology easily accessible and usable by individual companies that lack the resources or expertise to effectively combat fake news. We will also publish research papers resulting from this project.

Click here for more information: Milaina

Development of Decision Support System SW based on Next-Generation ML (SW StarLab) (*Selected as one of the 100 Outstanding National R&D Achievements of 2021)

2018.04.01 - 2025.12.31 (8 years). This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2018-0-00584).


In this project, we aim to develop a novel decision support system, METIS, which is short for ML-based Decision Support Information System. Based on next-generation machine learning, it provides the selected information that users really need (i.e., it supports users' decisions) by incorporating a variety of heterogeneous data into a unified modeling framework. Our decision support system has four important and differentiating features compared to existing systems and generic technologies.

1) It supports easy and effective modeling on large-scale heterogeneous data based on the integrated modeling framework. It flexibly constructs models suitable for the target domains, applications, services, and data, and learns them efficiently.

2) It manages dynamic data and models by using incremental learning. In other words, it efficiently reflects the data accumulated over time in the model from the previous time-stamp.

3) It preserves the privacy of user data by using federated model learning. It trains global models and improves their performance by considering a large amount of user data without accessing users' local data (a minimal sketch follows this list).

4) It provides good scalability and efficiency by fully utilizing limited resources. In terms of model learning, it efficiently processes large-scale data which conventionally cannot be handled on a local device, where computation and memory resources are limited.
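The following is a minimal sketch of the federated averaging idea behind feature 3): a global model is updated from clients' locally computed updates, and the server never sees the raw local data. The linear model, simulated clients, and weighting scheme are simplified assumptions for illustration, not METIS's actual federated learning algorithm.

```python
# Minimal federated-averaging sketch (assumed simplification, not METIS itself):
# each client takes a gradient step on its own data; only model updates,
# never raw data, are sent to the server, which averages them.
import numpy as np

def local_gradient(w, X, y):
    # Gradient of mean squared error for a linear model y ~ X @ w.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def federated_round(w, clients, lr=0.1):
    # Each client trains locally; the server averages updates weighted by data size.
    updates, sizes = [], []
    for X, y in clients:
        local_w = w - lr * local_gradient(w, X, y)   # one local step
        updates.append(local_w)
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):                                   # four simulated clients
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, clients)
print(w)  # approaches true_w without the server touching client data
```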


Click here for more information: METIS

Development of Integration and Inference Technology over Web-scale Complex Data

This research was supported by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2017M3C4A7063570).


Data on the web are not only large-scale but also extremely high-dimensional, highly multi-class, and heterogeneous in type. In addition, most web data change dynamically over time and are unstructured. We define data with such characteristics as “Big Complex Type Data.” Since Big Complex Type Data on the web consist of various kinds of information, analyzing them is challenging. Thanks to recent technological improvements, methods to analyze Big Complex Type Data are now becoming available. Our goal is to invent robust methods for mining Big Complex Type Data in order to help develop technologies that extract information from Big Complex Type Data, integrate the extracted information, and generate knowledge through inference on the integrated data.

MELOW: Machine learning framework for Embedded LOW-power system

2016.12.01 - 2024.02.28 (8 years). This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2016R1E1A1A01942642).


This project aims to develop machine learning technology optimized for low-power embedded systems. Researchers currently tend to use complex models to obtain highly accurate results via machine learning on big data. However, the training process consumes more power as the model size increases, which becomes the main obstacle to utilizing machine learning in embedded systems with limited power. Therefore, this project develops machine learning technology that minimizes power consumption during training while maintaining model accuracy, by exploiting various techniques such as model compression and the utilization of various computing resources (flash memory, GPU), and ultimately a framework (MELOW: Machine learning for Embedded LOW-power systems) combining those methodologies. Furthermore, this project will demonstrate the practicality of MELOW by developing machine learning applications running on the MELOW framework. A minimal model-compression sketch is given below.
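As a hedged illustration of the model compression direction, the sketch below prunes the smallest-magnitude weights of a trained linear model and measures the accuracy impact. The synthetic data, logistic regression model, and 90% sparsity level are assumptions for illustration, not the compression methods developed in MELOW.

```python
# Hypothetical magnitude-pruning sketch (illustration only, not MELOW's method):
# zero out the smallest-magnitude weights of a trained model to shrink its
# memory footprint, then check how much accuracy is lost.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=100, n_informative=10,
                           random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)
print("dense accuracy :", model.score(X, y))

w = model.coef_.ravel().copy()
threshold = np.quantile(np.abs(w), 0.9)   # keep only the largest 10% of weights
w[np.abs(w) < threshold] = 0.0
model.coef_ = w.reshape(1, -1)
print("pruned accuracy:", model.score(X, y))
print("nonzero weights:", int(np.count_nonzero(w)), "of", w.size)
```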

Development of Enabling Software Technology for Big Data Mining

2012.07.01 - 2017.06.30 (5 years). This research was supported by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (No. 2012M3C4A7033344).


The goal of this project is the development of enabling software technologies for big data mining. Through this project, we will research data mining techniques for big data in natural sciences and social networks. We will also develop personalized service technologies based on unstructured big data analysis and customer behavior models. Furthermore, we will produce well-trained software engineers who are experts in big data mining.

Developing Search and Mining Technologies for Mobile Devices (*Selected as an outstanding national research project)

2011.05.01 - 2014.04.30 (3 years). This work was supported by the Brain Korea 21 Project in 2010 and Mid-career Researcher Program through NRF grant funded by the MEST (No. KRF-2011-0016029).

 

Combining the highly profitable information search industry with the mobile computing paradigm, the mobile information search industry has been growing rapidly despite the global economic recession. Thus, the development of mobile search technology will positively impact the economy. This project aims at advancing technologies in the areas of mobile search and mining, low-power utility mining, and mining for mobile online advertising.

User-Friendly Search Engine for PubMed

2009.05.01 - 2012.02.28 (3 years). This work was supported by the Brain Korea 21 Project and Mid-career Researcher Program through NRF grant funded by the MEST (No. KRF-2009-0080667).


PubMed MEDLINE, a database of biomedical and life science journal articles, is one of the most important information sources for medical doctors and bio-researchers. Finding the right information in MEDLINE is nontrivial because it is not easy to express the intended relevance using the current PubMed query interface, and its query processor focuses on fast matching rather than accurate relevance ranking. This project develops techniques for building a user-friendly MEDLINE search engine.

Novel Recommendation for Digital TV

2011.02.01 - 2012.01.31 (1 year). This work was supported by Samsung Electronics.


Existing recommendation systems (e.g., those from the Netflix competition) focus on accurate prediction of purchases, as the systems are evaluated based on prediction accuracy. However, such systems tend to recommend popular items. Recommending popular items might not effectively influence users' purchase decisions, as users likely already know the items and have already decided whether to purchase them (e.g., a recommendation to watch Star Wars or Titanic). An effective recommender must suggest unexpected or novel items that can surprise users and affect their purchase decisions. This project develops an effective recommendation system for digital TV customers; a minimal novelty-aware reranking sketch follows.
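The sketch below illustrates the novelty idea under assumed data: candidate items are reranked by discounting popularity, so that items the user is predicted to like but is less likely to already know rise to the top. The scoring formula, alpha parameter, and toy catalog are illustrative choices, not the project's actual recommender.

```python
# Hypothetical novelty-aware reranking sketch (not the deployed system):
# discount predicted preference by item popularity so that likely-unknown
# items can surface above blockbusters everyone has already seen.
import math

def novelty_rerank(predicted_rating, popularity, n_users, alpha=1.0, k=3):
    """predicted_rating: item -> predicted preference for this user
    popularity       : item -> number of users who watched the item
    alpha            : strength of the novelty (low-popularity) bonus
    """
    def score(item):
        novelty = -math.log(popularity[item] / n_users)  # self-information
        return predicted_rating[item] + alpha * novelty
    return sorted(predicted_rating, key=score, reverse=True)[:k]

ratings = {"Star Wars": 4.8, "Titanic": 4.7, "Indie Doc": 4.3, "Local Drama": 4.1}
views   = {"Star Wars": 9000, "Titanic": 8500, "Indie Doc": 300, "Local Drama": 120}
print(novelty_rerank(ratings, views, n_users=10000))
```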

Feature Weighting for Ranking

This work is supported by MSRA (Microsoft Research Asia).

 

Feature weighting for ranking has not been researched as extensively as feature weighting for classification. This project develops various feature weighting methods for ranking by leveraging existing methods for classification. The developed methods are applied to the Live Search query log data to identify key features that determine users' click-through behavior. The developed methods will also be used to build the feature selection component of RefMed, a relevance-feedback PubMed search engine. A minimal pairwise-classification sketch is given below.
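Below is a minimal sketch of one way to borrow a classification method for ranking-oriented feature weighting: form pairwise difference vectors from preference pairs, train a linear classifier on them (RankSVM-style), and read feature weights off the coefficient magnitudes. The synthetic data and the use of LinearSVC are assumptions for illustration, not the specific methods developed for the Live Search logs or RefMed.

```python
# Hypothetical RankSVM-style sketch: turn ranking preferences into a
# classification problem on pairwise feature differences, then use the
# learned coefficients as feature weights. Illustration only.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_docs, n_features = 200, 6
X = rng.normal(size=(n_docs, n_features))
true_w = np.array([2.0, 0.0, 1.0, 0.0, 0.0, -1.5])     # hidden relevance model
relevance = X @ true_w + 0.1 * rng.normal(size=n_docs)

# Build pairwise training data: x_i - x_j labeled +1 if doc i is preferred.
pairs, labels = [], []
for _ in range(2000):
    i, j = rng.integers(0, n_docs, size=2)
    if relevance[i] == relevance[j]:
        continue
    pairs.append(X[i] - X[j])
    labels.append(1 if relevance[i] > relevance[j] else -1)

clf = LinearSVC(C=1.0, max_iter=10000).fit(np.array(pairs), np.array(labels))
weights = np.abs(clf.coef_.ravel())
print("relative feature weights:", np.round(weights / weights.max(), 2))
```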

Enabling Relevance Ranking in Databases for User-Friendly Data Retrieval

2008.07.01 - 2011.06.30 (3 years). This work is supported by the Brain Korea 21 Project and the Korea Research Foundation Grant funded by the Korean Government (KRF-2008-331-D00528).


Most online data retrieval systems, built on relational database management systems (RDBMSs), support fast processing of Boolean queries but offer little support for relevance or preference ranking. Unified support of Boolean and ranking constraints in a query is essential for user-friendly data retrieval. This project develops foundational techniques that enable data retrieval systems in which users intuitively express ranking constraints and the system efficiently processes the queries. A minimal query sketch follows.
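As a hedged, simplified illustration of a query that mixes Boolean filtering with a user-specified preference ranking, the snippet below uses Python's standard sqlite3 module; the schema and the scoring expression are made up for the example and do not reflect the project's query language or processing techniques.

```python
# Simplified illustration (not the project's system): a query that combines a
# Boolean filter with a preference-ranking expression and a top-k cut-off.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cars (model TEXT, price INTEGER, mileage INTEGER)")
conn.executemany("INSERT INTO cars VALUES (?, ?, ?)", [
    ("A", 18000, 40000), ("B", 22000, 15000),
    ("C", 15000, 90000), ("D", 25000, 5000),
])

# Boolean constraint: price under 23000. Ranking constraint: prefer cheaper
# cars with lower mileage, combined by an explicit user-chosen weighting.
rows = conn.execute("""
    SELECT model, price, mileage,
           0.7 * price + 0.3 * mileage AS score
    FROM cars
    WHERE price < 23000
    ORDER BY score ASC
    LIMIT 2
""").fetchall()
print(rows)
```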

Development of Kernel Based Real-time Recommender System through Structured Web Data Analysis

With Prof. Jaewook Lee (PI), 2008.07.01 - 2010.06.30 (2 years). This work was supported by the Brain Korea 21 Project and the Korea Research Foundation Grant funded by the Korean Government (KRF-2008-314-D00483).

Support Vector Machine with Limited Memory

Support vector machines (SVMs) have been promising methods for classification, regression, and ranking analysis due to their solid mathematical foundations, which include two desirable properties: margin maximization and nonlinear classification using kernels. However, despite these prominent properties, SVMs are usually not chosen for large-scale data mining problems because their training complexity is highly dependent on the data set size. Unlike traditional pattern recognition and machine learning, real-world data mining applications often involve a huge number of data records that do not fit in main memory, and multiple scans of the data set are often too expensive. Through this project, we developed techniques for approximately training SVMs in one scan of the database. A minimal single-pass training sketch is given below.
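The sketch below is not the project's algorithm; it only illustrates the single-scan constraint using scikit-learn's SGDClassifier, which trains a linear hinge-loss classifier incrementally via partial_fit so each memory-sized chunk of the data is visited exactly once.

```python
# Illustration of one-scan training under limited memory (not the project's
# technique): stream the data in chunks and update a linear hinge-loss model
# incrementally, so no chunk needs to be revisited.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=100_000, n_features=20, random_state=0)
classes = np.unique(y)

clf = SGDClassifier(loss="hinge", random_state=0)   # linear SVM-style loss
chunk = 5_000                                       # pretend this fits in RAM
for start in range(0, len(y), chunk):               # exactly one scan
    sl = slice(start, start + chunk)
    clf.partial_fit(X[sl], y[sl], classes=classes)

print("training accuracy after one scan:", clf.score(X, y))
```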


Representative Publications

SCC: Single-Class Classification or Classification Without Negative Examples

Single-Class Classification (SCC) seeks to distinguish one class of data from the universal set of multiple classes. We call the target class positive and the complementary set of samples negative. In SCC problems, it is assumed that a reasonable sample of the negative data is not available. Since it is not natural to collect "non-interesting" objects (i.e., negative data) to train the concept of "interesting" objects (i.e., positive data), SCC problems are prevalent in the real world, where positive and unlabeled data are widely available but negative data are hard or expensive to acquire. We developed SCC algorithms which compute the boundary functions of the target class from positive and unlabeled data (without labeled negative data). The basic idea is to exploit the natural "gap" between positive and negative data by incrementally labeling negative data from the unlabeled data using the margin maximization property of SVMs. Our SCC algorithms build classification functions very close to those of an SVM trained with fully labeled data when the positive data is not heavily under-sampled. A minimal two-step sketch follows.
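Below is a minimal two-step sketch in the spirit of learning from positive and unlabeled data, a simplification rather than the exact algorithm described above: first treat unlabeled examples as tentative negatives to identify reliable negatives, then retrain an SVM on positives versus those reliable negatives. The dataset, SVM settings, and 30% confidence cutoff are assumptions for illustration.

```python
# Hypothetical two-step positive/unlabeled (PU) learning sketch, in the spirit
# of extracting negatives from unlabeled data with an SVM. Simplified
# illustration, not the exact SCC algorithm.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=3000, n_features=10, random_state=0)
pos = X[y == 1]
unlabeled = X[rng.permutation(len(X))[:1500]]        # mix of hidden pos/neg

# Step 1: train with unlabeled data treated as negative, then keep only the
# unlabeled points the model is most confident are negative.
X1 = np.vstack([pos, unlabeled])
y1 = np.concatenate([np.ones(len(pos), dtype=int), np.zeros(len(unlabeled), dtype=int)])
step1 = SVC(kernel="linear").fit(X1, y1)
scores = step1.decision_function(unlabeled)
reliable_neg = unlabeled[scores < np.quantile(scores, 0.3)]

# Step 2: retrain on positives vs. reliable negatives only.
X2 = np.vstack([pos, reliable_neg])
y2 = np.concatenate([np.ones(len(pos), dtype=int), np.zeros(len(reliable_neg), dtype=int)])
final = SVC(kernel="linear").fit(X2, y2)
print("accuracy on fully labeled data:", final.score(X, y))
```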


Representative Publications

PP-SVM: Privacy Preserving Support Vector Machines

Data mining has many applications in the real world. Classification is an important sub-class of problems found in a wide variety of situations. Fraud detection is one of the biggest and most important classification problems. Take the case of identifying fraudulent credit card transactions. Banks collect transactional information for credit card customers. Due to the growing threat of identity theft, credit card loss, and the like, identifying fraudulent transactions can lead to annual savings of billions of dollars. Deciding whether a particular transaction is legitimate or fraudulent is a classification problem. Another completely different, though equally important, problem arises in healthcare (e.g., disease diagnosis). Many such problems abound. Currently, classifiers are run locally or over data collected at one central location (i.e., in a data warehouse). The accuracy of a classifier usually improves with more training data. Data collected from different sites are especially useful, since they provide a better estimation of the population than data collected at a single site. However, privacy and security concerns restrict the free sharing of data. There are both legal and commercial reasons not to share data. For example, HIPAA regulations require that medical data not be released without appropriate anonymization. Similar constraints arise in many applications; European Community legal restrictions apply to the disclosure of any individual data. In commercial terms, data are often a valuable business asset. For example, complete manufacturing processes are trade secrets (although individual techniques may be commonly known). Thus, it is increasingly important to enable privacy-preserving distributed mining of information.


Support vector machine (SVM) classification is one of the most widely used classification methodologies in data mining and machine learning. SVMs have proven to be effective in many real-world applications. This project develops secure SVM classification solutions for distributed data; a minimal secure-summation sketch follows.
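The project's actual protocols are not reproduced here. As a hedged illustration of the kind of primitive used in privacy-preserving distributed learning, the sketch below shows additive secret sharing, where parties jointly compute a sum (for example, of local statistics or model contributions) without revealing their individual values.

```python
# Illustration only (not the project's protocol): additive secret sharing lets
# several parties compute the sum of their private values while each party's
# own value stays hidden behind random masks.
import random

def share(value, n_parties, modulus=2**61 - 1):
    """Split an integer into n random shares that sum to it (mod modulus)."""
    shares = [random.randrange(modulus) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % modulus)
    return shares

def secure_sum(private_values, modulus=2**61 - 1):
    n = len(private_values)
    # Each party splits its value and sends one share to every other party.
    all_shares = [share(v, n, modulus) for v in private_values]
    # Each party sums the shares it received (one column); combining the
    # partial sums yields the total without exposing any single raw value.
    partial = [sum(col) % modulus for col in zip(*all_shares)]
    return sum(partial) % modulus

values = [13, 42, 7, 100]          # e.g., local statistics at four sites
print(secure_sum(values), "==", sum(values))
```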


These solutions are based on our SVM Java implementation:


Representative Publications

Feature Discovery and Visualization in Support Vector Machines

Despite the popularity of SVMs in the data mining and machine learning communities, applying them to real-world classification problems often confronts another obstacle: the barrier of understanding and interpreting the results. For example, physicians may want to use SVM classification techniques for early diagnosis of diabetic patients. However, if the classification model generates the diagnosis result without an explanation of why or how, physicians may not appreciate or trust the result. As another example, pharmacologists may be given an SVM model that accurately distinguishes active drugs from inactive drugs for a symptom, but the model may not be useful to them if it does not explain which components in the drug play the key roles.


Through this project, we developed techniques that discover discriminative feature combinations using SVM models. Our methods effectively capture the feature combinations on a drug activity dataset. We also developed Localized Radial Basis Function (L-RBF) kernels to visualize discriminative features for nonlinear SVM models. Our system captures and visualizes important factors for a disease, which provides valuable information to physicians. A minimal feature-importance sketch follows.
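As a minimal, hedged illustration of reading discriminative features out of an SVM (for the linear case only; the project's L-RBF kernels and feature-combination discovery are more involved), the sketch below ranks features by the magnitude of a linear SVM's coefficients on a synthetic dataset.

```python
# Minimal illustration (linear case only, not the L-RBF method): rank features
# by the magnitude of a linear SVM's learned coefficients.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=1000, n_features=8, n_informative=3,
                           n_redundant=0, shuffle=False, random_state=0)
clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)

importance = np.abs(clf.coef_.ravel())
for idx in np.argsort(importance)[::-1]:
    print(f"feature {idx}: weight magnitude {importance[idx]:.3f}")
```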


Representative Publications