Medical AI
2023
[2023.03.03] Soomin Chung (presentation link) Keyword: Supervised learning, Loss
• Supervised contrastive learning. Advances in Neural Information Processing Systems 33 (2020) (link)
[2023.03.10] Junmo Kim (presentation link) Keyword: Continual Learning, Prompt-based Learning
• Learning To Prompt for Continual Learning. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2022) (link)
[2023.03.24] Jisoo Lee (presentation link) Keyword: Diffusion model, Generative model
• Denoising Diffusion Probabilistic Models. Advances in Neural Information Processing Systems 33 (2020) (link)
[2023.03.31] Jungyun Oh (presentation link) Keyword: Self-Supervised Learning, Anomaly Detection
• Self-Supervised Predictive Convolutional Attentive Block for Anomaly Detection. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 13576-13586). (2022) (link)
[2023.04.07] Doyun Kwon (presentation link) Keyword: Continual learning, Training regimes
• Understanding the role of training regimes in continual learning. Advances in Neural Information Processing Systems 33 (pp. 7308-7320). (2020) (link)
[2023.05.03] Soomin Chung (presentation link) Keyword: Curriculum learning, Robust learning
• Robust curriculum learning: from clean label detection to noisy label self-correction. International Conference on Learning Representations, (2021) (link)
[2023.05.12] Junmo Kim (presentation link) Keyword: Multimodal data, Pretrained model, MIMIC-IV
• Integrated multimodal artificial intelligence framework for healthcare applications. NPJ Digital Medicine, 5, 149 (2022) (link)
[2023.05.19] Sayoon Park (presentation link) Keyword: Audio self-supervised learning, Quantization module
• wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in Neural Information Processing Systems 33 (2020) (link)
[2023.05.26] Jisoo Lee (presentation link) Keyword: Contrastive Learning, Time-series representation learning
• TS2Vec: Towards Universal Representation of Time Series. Proceedings of the AAAI Conference on Artificial Intelligence 36, 8 (2022) (link)
[2023.06.16] Jungyun Oh (presentation link) Keyword: Multi-Time Attention Networks, Irregularly Multivariate Time-series
• Multi-time attention networks for irregularly sampled time series. arXiv preprint arXiv:2101.10318. (2021) (link)
[2023.06.23] Doyun Kwon (presentation link) Keyword: Transformer, Pre-trained Model
• A foundational vision transformer improves diagnostic performance for electrocardiograms. NPJ Digital Medicine, 6(1), 108. (2023) (link)
[2023.07.14] Soomin Chung (presentation link) Keyword : Time-series, Transformer, Representation learning
• TARNet: Task-Aware Reconstruction for Time-Series Transformer. Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (pp. 212-220). (2022) (link)
[2023.07.21] Junmo Kim (presentation link) Keyword : Calibration
• A tutorial on calibration measurements and calibration models for clinical prediction models. Journal of the American Medical Informatics Association, 27(4) (2020) (link)
[2023.07.28] Sayoon Park (presentation link) Keyword : Audio spectrogram transformer, Self-supervised pretraining
• SSAST: Self-Supervised Audio Spectrogram Transformer. Proceedings of the AAAI Conference on Artificial Intelligence, 36, 10 (2022) (link)
[2023.08.04] Jisoo Lee (presentation link) Keyword : Tree-based models, Tabular data
• Why do tree-based models still outperform deep learning on typical tabular data? Advances in Neural Information Processing Systems, 35 (2022) (link)
[2023.08.11] Jungyun Oh (presentation link) Keyword : Semantic segmentation, Transformer
• Per-Pixel Classification is Not All You Need for Semantic Segmentation, Advances in Neural Information Processing Systems, 34 (2021) (link)
[2023.08.18] Doyun Kwon (presentation link) Keyword : Self-supervised learning, Transformer
• Masked autoencoders are scalable vision learners. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 16000-16009) (2022) (link)
[2023.08.25] Soomin Chung (presentation link) Keyword : Longitudinal study, EHR, GAN
• Clinical-GAN : Trajectory Forecasting of Clinical Events using Transformer and Generative Adversarial Networks. Artificial Intelligence in Medicine, 138 (2023) (link)
[2023.09.01] Junmo Kim (presentation link) Keyword : Causal inference
• Toward Causal Representation Learning. Proceedings of the IEEE, 109, 5 (2021) (link)
[2023.09.07] Sayoon Park (presentation link) Keyword : Self-supervised learning
• Self-supervised learning in medicine and healthcare. Nature Biomedical Engineering, 6, 12 (2022) (link)
[2023.09.21] Jisoo Lee (presentation link) Keyword : Multimodal self-supervised learning, Transformer
• VATT : Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text. Advances in Neural Information Processing Systems, 34 (2021) (link)
[2023.10.05] Jungyun Oh (presentation link) Keyword : CNN based model refinements
• Bag of Tricks for Image Classification with Convolutional Neural Networks. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 558-567) (2019) (link)
[2023.10.12] Doyun Kwon (presentation link) Keyword : Multi-modal, Transformer
• Multimodal Learning with Transformers: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, (2023) (link)
[2023.10.19] Soomin Chung (presentation link) Keyword : Pretraining, BERT, ECG Language Processing
• ECGBERT: Understanding Hidden Language of ECGs with Self-Supervised Representation Learning. arXiv preprint, (2023) (link)
[2023.11.02] Junmo Kim (presentation link) Keyword :
• Learning of Cluster-based Feature Importance for Electronic Health Record Time-series. International Conference on Machine Learning, PMLR, (pp. 161-179) (2022) (link)
[2023.11.09] Sayoon Park (presentation link) Keyword : Auscultation sound, Deep learning
• DeepBreath—automated detection of respiratory pathology from lung auscultation in 572 pediatric outpatients across 5 countries. NPJ Digital Medicine, 6, 1 (2023) (link)
[2023.11.16] Jisoo Lee (presentation link) Keyword : Self-/Semi- Supervised Learning, Tabular data
• VIME: Extending the Success of Self- and Semi-supervised Learning to Tabular Domain. Advances in Neural Information Processing Systems, 33, (2020) (link)
[2023.11.23] Jungyun Oh (presentation link) Keyword : Time series forecasting, Isometric convolution
• MICN: Multi-scale Local and Global Context Modeling for Long-term Series Forecasting. International Conference on Learning Representations, (2023) (link)
[2023.12.14] Doyun Kwon (presentation link) Keyword : Self-supervised learning, Masked Time-series Modeling
• SimMTM: A Simple Pre-Training Framework for Masked Time-Series Modeling. Advances in Neural Information Processing Systems, (2023) (link)
[2023.12.21] Soomin Chung (presentation link) Keyword : Domain transfer, pretraining
• One Fits All: Power General Time Series Analysis by Pretrained LM. Advances in Neural Information Processing Systems, (2023) (link)
2024
[2024.01.10] Junmo Kim (presentation link) Keyword: Digital Twin, Electronic Health Records (EHR), Pre-training
• About Digital Twin & Pretrained EHR - (1)
[2024.01.10] Sayoon Park (presentation link) Keyword: EEG, Self-supervised learning, Pre-trained model
• Neuro-GPT: Developing A Foundation Model for EEG. arXiv preprint (2023) (link)
[2024.01.17] Jisoo Lee (presentation link) Keyword: Multi-modal self-supervised learning, Time series
• Sequential Multi-Dimensional Self-Supervised Learning for Clinical Time Series. International Conference on Machine Learning (pp. 28531-28548) (2023) (link)
[2024.01.17] Jungyun Oh (presentation link) Keyword: Pruning, Depth-wise convolution
• Pruning-as-Search: Efficient Neural Architecture Search via Channel Pruning and Structural Reparameterization. Thirty-First International Joint Conference on Artificial Intelligence (2022) (link)
[2024.01.24] Doyun Kwon (presentation link) Keyword: Transformer, Self-supervised learning, Forecasting
• A Time Series is Worth 64 Words: Long-term Forecasting with Transformers. The Eleventh International Conference on Learning Representations (2023) (link)
[2024.01.24] Soomin Chung (presentation link) Keyword: Time series, Transformer, Multi-variate
• iTransformer: Inverted Transformers Are Effective for Time Series Forecasting. The Twelfth International Conference on Learning Representations (2024) (link)
[2024.01.31] Dareen Eom (presentation link) Keyword: Selective prediction, Self-evaluation, Soft prompt tuning
• Adaptation with Self-Evaluation to Improve Selective Prediction in LLMs. Conference on Empirical Methods in Natural Language Processing (2023) (link)
[2024.01.31] Chaiho Shin (presentation link) Keyword: Large corpus similarity search, In-context pre-training
• In-Context Pretraining: Language Modeling Beyond Document Boundaries. The Twelfth International Conference on Learning Representations (2024) (link)
[2024.02.07] Junmo Kim (presentation link) Keyword: Digital Twin, Electronic Health Records (EHR), Pre-training
• About Digital Twin & Pretrained EHR - (2)
[2024.02.07] Sayoon Park (presentation link) Keyword: Self-supervised learning, Phonocardiogram (PCG)
• On the Out-Of-Distribution Robustness of Self-Supervised Representation Learning for Phonocardiogram Signals. arXiv preprint (2023) (link)
[2024.02.14] Jisoo Lee (presentation link) Keyword: Adversarial fine-tuning, Diffusion probabilistic model, Respiratory sound
• Adversarial Fine-tuning using Generated Respiratory Sound to Address Class Imbalance. Deep Generative Models for Health Workshop, NeurIPS (2023) (link)
[2024.02.14] Jungyun Oh (presentation link) Keyword: Unsupervised representation learning, Temporal neighborhood
• Unsupervised Representation Learning for Time Series with Temporal Neighborhood Coding. International Conference on Learning Representations (2021) (link)
[2024.02.21] Doyun Kwon (presentation link) Keyword: LLM, Multi-modal, Forecasting
• Time-LLM: Time Series Forecasting by Reprogramming Large Language Models. International Conference on Learning Representations (2024) (link)
[2024.02.21] Soomin Chung (presentation link) Keyword:
• Contrastive Masked Autoencoders are Stronger Vision Learners. IEEE Transactions on Pattern Analysis and Machine Intelligence (2023) (link)
[2024.02.28] Chaiho Shin (presentation link) Keyword: Representation learning, Compositional operations in embedding space, Contrastive objective
• Bridging Continuous and Discrete Spaces: Interpretable Sentence Representation Learning via Compositional Operations. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (2023) (link)
[2024.02.28] Dareen Eom (presentation link) Keyword: LLM, Reasoning capability, Negative data
• Turning Dust into Gold: Distilling Complex Reasoning Capabilities from LLMs by Leveraging Negative Data. Thirty-Eighth AAAI Conference on Artificial Intelligence (2024) (link)
[2024.03.08] Kyuhee Lim (presentation link) Keyword: Keyword extraction, Keyword generation, Contrastive learning
• SimCKP: Simple Contrastive Learning of Keyphrase Representations. The 2023 Conference on Empirical Methods in Natural Language Processing (2023) (link)
[2024.03.15] Dongwoo Hyeon (presentation link) Keyword: Histopathology, Image retrieval, Self-supervised learning, Feature extraction
• RetCCL: Clustering-guided contrastive learning for whole-slide image retrieval. Medical Image Analysis (2023) (link)
[2024.03.15] Jisoo Lee (presentation link) Keyword: Biosignal, Cross-data Learning, Transformer
• BIOT: Biosignal Transformer for Cross-data Learning in the Wild. Advances in Neural Information Processing Systems (2024) (link)
[2024.03.22] Sayoon Park (presentation link) Keyword:
• Rethinking Pseudo Labels for Semi-supervised Object Detection. AAAI Conference on Artificial Intelligence 36, 2 (pp. 1314-1322). (2024) (link)
[2024.03.22] Junmo Kim (presentation link) Keyword:
• The process from paper submission to publication
[2024.03.29] Jungyun Oh (presentation link) Keyword: Data pruning
• Deep Learning on a Data Diet: Finding Important Examples Early in Training. Advances in Neural Information Processing Systems (2021) (link)
[2024.03.29] Doyun Kwon (presentation link) Keyword: Contrastive learning, Foundation model, Biosignals
• Large-scale Training of Foundation Models for Wearable Biosignals. International Conference on Learning Representations (2024) (link)
[2024.04.05] Soomin Chung (presentation link) Keyword: SSL, MIM, CNN
• Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling. International Conference on Learning Representations (2023) (link)
[2024.04.05] Chaiho Shin (presentation link) Keyword: LLM-based dataset construction, LLM-based evaluation
• Gecko: Versatile Text Embeddings Distilled from Large Language Models. arXiv preprint (2024) (link)
[2024.04.12] Dareen Eom (presentation link) Keyword: Evolutionary algorithm, Prompt engineering
• Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers. International Conference on Learning Representations (2024) (link)
[2024.04.12] Kyuhee Lim (presentation link) Keyword: DEFT, LoRA, Active learning
• STAR: Constraint LoRA with Dynamic Active Learning for Data-Efficient Fine-Tuning of Large Language Models. arXiv preprint. (2024) (link)