(With Caitlin in August 2014)
Director of Ajou FAST Lab
Department of Software and Computer Engineering
College of Information Technology
Email: youkim a.t. ajou.ac.kr
Office: No. 704 Paldal Hall
Address: 206 Worldcup-Ro, Yeongtong-Gu, Suwon-Si, Gyeonggi-Do, 443-749, Korea
Big data challenges include capture, storage, search, sharing, transfer, analysis, and visualization. Among these, my research has mainly focused on the problems of storage, sharing, and transfer for big data and cloud platforms. In recent work, I identified multiple bottlenecks along the end-to-end path from source to sink storage, and found that the storage infrastructure at each endpoint, and its interplay with the network, is increasingly the bottleneck to achieving high performance. I proposed exploiting the underlying storage data layout at each endpoint, and implemented an I/O scheduler called the layout-aware data scheduler (FAST'15, TPDS'16). I have also tackled the problem that existing file systems are ill-suited to data management services, because file systems and data management services are decoupled in their design and implementation; I explored coupling data management services with file systems to reduce the data movement costs between them (ATC'16 poster). In addition, I have done extensive research on improving I/O performance for file and storage systems and on emerging storage technologies (TC'14, SC'15, FAST'13). Recently, I have begun to explore I/O and storage challenges in scalable training of deep neural networks, aiming to reduce the long training times caused by slow storage systems. I am investigating cluster-wide deduplication techniques and application QoS services in a shared storage system (using the Ceph file system), and I am also interested in I/O stack optimization in the mobile OS platform (Android).
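The cluster-wide deduplication idea mentioned above can be illustrated with a minimal single-node sketch. This is a toy model, not the actual Ceph-based design: the fixed-size chunking, SHA-256 fingerprints, and the `DedupStore` class are all illustrative assumptions.

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunking; real systems often use content-defined chunking

class DedupStore:
    """Stores each unique chunk once; files are recipes of chunk fingerprints."""
    def __init__(self):
        self.chunks = {}   # fingerprint -> chunk bytes (stored once)
        self.files = {}    # file name -> ordered list of fingerprints

    def write(self, name, data):
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            fp = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(fp, chunk)   # store chunk only if unseen
            recipe.append(fp)
        self.files[name] = recipe

    def read(self, name):
        # Reassemble the file from its chunk recipe.
        return b"".join(self.chunks[fp] for fp in self.files[name])

    def unique_bytes(self):
        # Physical bytes actually stored after deduplication.
        return sum(len(c) for c in self.chunks.values())
```

Writing two identical files stores the payload only once, which is the effect a cluster-wide deduplication service aims for at scale.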
- (2015 - Present) Assistant Professor, Department of Software and Computer Engineering, Ajou University, Korea
- (2009 - 2015) Research Staff Member, Oak Ridge National Laboratory, US Department of Energy, Oak Ridge, TN, USA
  - (2014 - 2015) Team Leader, Non-Volatile Memory File System Team, National Center for Computational Sciences
    - Explored non-volatile memory devices from several perspectives, including memory extension, burst buffers, out-of-core data analytics, and fault tolerance for extreme compute and data systems
  - (2009 - 2015) Research Staff Member
- (2011-2014) Adjunct Professor, School of Electrical and Computer Engineering, Georgia Institute of Technology
- (2003-2004) Researcher, Embedded OS Team, ETRI, Daejeon, Korea
System Software Design and Development, Distributed File Systems, Key-value Store/Caches, In-Situ Analysis, Out-of-Core Computing, Non-volatile Memory
- I'm broadly interested in the intersection of cutting-edge technologies in hardware, system software, and applications, spanning a diverse spectrum of environments from enterprise computing to embedded domains.
- My recent research centers on software-defined storage, focusing on the storage, sharing, and transfer of big data, toward high-performance big data platforms and infrastructure.
Recent Research Topics:
- De-duplication for the Ceph file system
- Developing a Common Communication Interface (CCI)-based network file system
- Integrating search and discovery services into file systems
- Image storage system for scalable training of deep neural networks
- Developing a unified virtual file system layer combining different mass storage systems for big data storage
- Providing a single namespace and a scalable metadata service framework
- Developing a fault-tolerant data transfer application over terabit networks
- Defining a fault-tolerant messaging protocol and optimizing the space required to maintain log information for recovery from failures
- End-to-end analysis-aware data placement in a virtual data facility
- Hospital data management systems for joint treatment in health services (from data collection to storage and management)
The following are some of the research topics I am interested in.
"Science discovery services from traditional block-based storage systems" - How can scientific search and discovery services be integrated into file and storage systems?
Traditional block-based file systems were developed to efficiently provide basic I/O services and space management. This design decoupled computation from data storage, so data management functions such as search and science discovery services have relied on a separate service running atop the storage system. In the big data era, however, high data production rates have raised data management costs and increased the cost of moving data between computational resources and storage systems. In this research, we investigate the lack of data management services in traditional file systems and develop science discovery services for extreme-scale storage systems.
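The coupling idea can be sketched as a write path that updates a discovery index inline, so later searches need no separate crawl over the storage system. This is a purely illustrative design (the `IndexedStore` class, tag-based indexing, and method names are my assumptions, not an actual file system implementation):

```python
from collections import defaultdict

class IndexedStore:
    """Toy store whose write path indexes metadata tags as data arrives."""
    def __init__(self):
        self.data = {}                 # path -> payload bytes
        self.index = defaultdict(set)  # tag -> set of paths carrying that tag

    def write(self, path, payload, tags=()):
        self.data[path] = payload
        for tag in tags:               # discovery index updated inline with the write
            self.index[tag].add(path)

    def search(self, tag):
        # No post-hoc scan of the store: the answer is already indexed.
        return sorted(self.index.get(tag, ()))
```

Because indexing happens at write time, a query touches only the index, avoiding the data movement cost of shipping files to an external search service.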
"Data coupling for big data services in geo-distributed data centers" - How can geo-dispersed datasets be coupled in cloud environments?
One of the challenges of big data is moving data across geographically distributed data centers. Multiple bottlenecks exist along the end-to-end path from source to sink; the data storage infrastructure at both endpoints, and its interplay with the wide-area network, is increasingly the bottleneck to achieving high performance. We address these challenges with a new bulk data movement framework for terabit networks called LADS (Layout-Aware Data Scheduling). LADS exploits the underlying storage layout at each endpoint to maximize throughput without negatively impacting the performance of shared storage resources for other users, and it uses the CCI (Common Communication Interface) in lieu of the sockets interface to leverage zero-copy, OS-bypass hardware when available. We are currently extending the LADS framework to support fault tolerance.
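The core layout-aware idea can be shown with a toy scheduler: each file chunk is tagged with the storage target (e.g., a Lustre OST) that holds it, and I/O is interleaved across targets rather than issued file-by-file, so no single target is saturated while others sit idle. The function name, the round-robin policy, and the data shapes here are illustrative assumptions, not the actual LADS implementation:

```python
from collections import defaultdict, deque

def layout_aware_schedule(chunks):
    """chunks: list of (chunk_id, target_id) pairs, in file order.
    Returns an I/O order that round-robins across storage targets
    instead of draining one file (and one target) at a time."""
    queues = defaultdict(deque)
    target_order = []                      # preserve first-seen target order
    for chunk_id, target in chunks:
        if target not in queues:
            target_order.append(target)
        queues[target].append(chunk_id)

    schedule = []
    while any(queues[t] for t in target_order):
        for t in target_order:             # one chunk per target per round
            if queues[t]:
                schedule.append(queues[t].popleft())
    return schedule
```

With two files striped on two different targets, the schedule alternates targets each round, which is the contention-avoiding behavior a layout-aware data mover wants.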
"Image storage systems" - How to build an image storage system for scalable training of deep neural networks?
Storing images in a database makes them much easier to manage than storing them in a file system. The classic argument against the database, of course, is that it is slow: retrieving an image from a database takes longer than retrieving it from the file system. When the amount of image data is small, there is little difference between the two approaches, but given the pace at which image data has been growing, we need to address the file system's problems in order to improve image storage systems. Our focus is mainly on building an image storage system for scalable training of deep neural networks.
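One common direction, sketched below, packs many small images into one large shard with an offset index, turning millions of small-file opens into a few large sequential reads. The `ImageShard` class and its format are hypothetical, chosen only to illustrate the idea; production systems use formats such as LMDB or TFRecord.

```python
import io

class ImageShard:
    """Toy packed shard: many images in one byte stream plus an offset index."""
    def __init__(self):
        self.buf = io.BytesIO()
        self.index = {}            # image key -> (offset, length)

    def add(self, key, data):
        self.buf.seek(0, 2)        # always append at the end of the shard
        off = self.buf.tell()
        self.buf.write(data)
        self.index[key] = (off, len(data))

    def get(self, key):
        off, length = self.index[key]
        self.buf.seek(off)         # one seek + one contiguous read per image
        return self.buf.read(length)
```

During training, a data loader would read whole shards sequentially and use the index for random access within a shard, avoiding per-image metadata operations on the file system.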
- [TPDS'16] Optimizing End-to-End Big Data Transfers over Terabits Network Infrastructure, Youngjae Kim, Scott Atchley, Geoffroy R. Vallée, Matt Lee, Galen M. Shipman, IEEE Transactions on Parallel and Distributed Systems (IEEE TPDS 2016) (Impact Factor: 2.17, according to Thomson Reuters' 2013 Journal Citation Report)
- [DAC'16-Poster] Synchronous Independent Write Cache: A Novel SSD-aware Write Cache for RAID, Junghee Lee, Youngjae Kim, Kalidas Ganesh, Joonyoung Paik, In Proceedings of the 53rd Design Automation Conference (DAC), Austin, TX, June, 2016.
- [USENIX FAST'15] LADS: Optimizing Data Transfers using Layout-Aware Data Scheduling, Youngjae Kim, Scott Atchley, Geoffroy R. Vallée, Galen M. Shipman, In Proceedings of the USENIX Conference on File and Storage Technologies (USENIX FAST 2015), San Jose, CA, February, 2015. (Acceptance Rate: 28/130 = 21.5%)
- [SC'15] AnalyzeThis: An Analysis Workflow-Aware Storage System, Hyogi Sim, Youngjae Kim, Sudharshan Vazhkudai, Devesh Tiwari, Ali Anwar, Ali Butt, Lavanya Ramakrishnan, In Proceedings of the 2015 ACM/IEEE International Conference on High Performance Computing, Networking, Storage and Analysis (SC 2015), Austin, TX, November, 2015. (Acceptance Rate: 79/358 = 22.0%)