Medical AI

2023

Learning To Prompt for Continual Learning. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2022) (link)

Denoising Diffusion Probabilistic Models. Advances in Neural Information Processing Systems 33 (2020) (link)

Self-Supervised Predictive Convolutional Attentive Block for Anomaly Detection. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 13576-13586) (2022) (link)

Understanding the role of training regimes in continual learning. Advances in Neural Information Processing Systems 33 (pp. 7308-7320) (2020) (link)

Robust curriculum learning: from clean label detection to noisy label self-correction. International Conference on Learning Representations (2021) (link)

Integrated multimodal artificial intelligence framework for healthcare applications. NPJ Digital Medicine, 5, 149 (2022) (link)

wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in Neural Information Processing Systems 33 (2020) (link)

TS2Vec: Towards Universal Representation of Time Series. Proceedings of the AAAI Conference on Artificial Intelligence 36, 8 (2022) (link)

Multi-time attention networks for irregularly sampled time series. arXiv preprint arXiv:2101.10318 (2021) (link)

A foundational vision transformer improves diagnostic performance for electrocardiograms. NPJ Digital Medicine, 6(1), 108 (2023) (link)

TARNet: Task-Aware Reconstruction for Time-Series Transformer. Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (pp. 212-220) (2022) (link)

A tutorial on calibration measurements and calibration models for clinical prediction models. Journal of the American Medical Informatics Association, 27(4) (2020) (link)

SSAST: Self-Supervised Audio Spectrogram Transformer. Proceedings of the AAAI Conference on Artificial Intelligence, 36, 10 (2022) (link)

Why do tree-based models still outperform deep learning on typical tabular data? Advances in Neural Information Processing Systems, 35 (2022) (link)

Per-Pixel Classification is Not All You Need for Semantic Segmentation. Advances in Neural Information Processing Systems, 34 (2021) (link)

Masked autoencoders are scalable vision learners. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 16000-16009) (2022) (link)

Clinical-GAN: Trajectory Forecasting of Clinical Events using Transformer and Generative Adversarial Networks. Artificial Intelligence in Medicine, 138 (2023) (link)

Toward Causal Representation Learning. Proceedings of the IEEE, 109, 5 (2021) (link)

Self-supervised learning in medicine and healthcare. Nature Biomedical Engineering, 6, 12 (2022) (link)

VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text. Advances in Neural Information Processing Systems, 34 (2021) (link)

Bag of Tricks for Image Classification with Convolutional Neural Networks. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 558-567) (2019) (link)

Multimodal Learning with Transformers: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence (2023) (link)

ECGBERT: Understanding Hidden Language of ECGs with Self-Supervised Representation Learning. arXiv preprint (2023) (link)

Learning of Cluster-based Feature Importance for Electronic Health Record Time-series. International Conference on Machine Learning, PMLR (pp. 161-179) (2022) (link)

DeepBreath—automated detection of respiratory pathology from lung auscultation in 572 pediatric outpatients across 5 countries. NPJ Digital Medicine, 6, 1 (2023) (link)

VIME: Extending the Success of Self- and Semi-supervised Learning to Tabular Domain. Advances in Neural Information Processing Systems, 33 (2020) (link)

MICN: Multi-scale Local and Global Context Modeling for Long-term Series Forecasting. International Conference on Learning Representations (2023) (link)

SimMTM: A Simple Pre-Training Framework for Masked Time-Series Modeling. Advances in Neural Information Processing Systems (2023) (link)

One Fits All: Power General Time Series Analysis by Pretrained LM. Advances in Neural Information Processing Systems (2023) (link)


2024