Schedule
SDU will be hosted via Virtual Chair (https://aaai-2022.virtualchair.net/index.html) on March 1, from 10 AM to 2 PM (PST) in room Red 5. For more information about workshop venues, please refer to AAAI Virtual Chair (https://aaai-2022.virtualchair.net/events_workshops.html).
All times are Pacific Time (Vancouver, UTC-8).
Workshop Schedule
10:00: Opening Remarks
10:05: Keynote by Dina Demner-Fushman (Title: Biomedical Scientific Document Understanding: Research and Applications @ NLM)
10:35: Scientific Chart Summarization: Datasets and Improved Text Modeling (by Hao Tan)
10:45: In patent classification, domain adaptation is all you need (by Dimitrios Christofidellis)
10:55: Segmenting Technical Drawing Figures in US Patents (by Md Reshad Ul Hoque)
11:05: RerrFact: Reduced Evidence Retrieval Representations for Scientific Claim Verification (by Ashish Rana)
11:15: Neural Architectures for Biological Inter-Sentence Relation Extraction (by Enrique Noriega-Atala)
11:25: Zero-Shot and Few-Shot Classification of Biomedical Articles in Context of the COVID-19 Pandemic (by Simon Lupart)
11:35: TableParser: Automatic Table Parsing with Weak Supervision from Spreadsheets (by Susie Xi Rao)
11:45: Longitudinal Citation Prediction using Temporal Graph Neural Networks (by Andreas Nugaard Holm)
11:55: Extraction of Competing Models using Distant Supervision and Graph Ranking (by Swayatta Daw)
12:05: Deeper Clinical Document Understanding using Relation Extraction (by Hasham Ul Haq)
12:15: Fine-grained Intent Classification in the Legal Domain (by Ankan Mullick)
12:25: Coherence-based Second Chance Autoencoders for Document Understanding (by Saria Goudarzvand)
12:35: BAM: Benchmarking Argument Mining on Scientific Documents (by Florian Ruosch)
12:45: When to Use Which Neural Network? Finding the Right Neural Network Architecture for a Research Problem (by Michael Färber)
12:55: Sequence Labeling for Citation Field Extraction from Cyrillic Script References (by Igor Shapiro)
13:05: BERTicsson: A Recommender System For Troubleshooting (by Nuria Marzo I Grimalt)
13:15: Leveraging Positional Information to Automatically Emphasize Key Portions of a Text (by Sebastian Gehrmann)
13:25: SimCLAD: A Simple Framework for Contrastive Learning of Acronym Disambiguation (by Bin Li)
13:35: Acronym Extraction with Hybrid Strategies (by Siheng Li)
13:45: CABACE: Injecting Character Sequence Information and Domain Knowledge for Enhanced Acronym and Long-Form Extraction (by Nithish Kannen)
13:55: Closing Remarks
Poster Session
PSG: Prompt-based Sequence Generation to Disambiguate Scientific Acronyms for Acronym Extraction (Bin Li)
Domain Adaptive Pretraining for Multilingual Acronym Extraction (Usama Yaseen)
Applying Multi-Task Reading Comprehension in Acronym Disambiguation (Yunpeng Tai)
Prompt-based Model for Acronym Disambiguation via Negative Sampling (Taiqiang Wu)
Acronym Identification using Transformers and Flair Framework (Fazlourrahman Balouchzahi)
Multilingual Acronym Disambiguation with Multi-choice Classification (Xinyu Zhu)
ADBCMM : Acronym Disambiguation by Building Counterfactuals and Multilingual Mixing (Yixuan Weng)
A Novel Initial Reminder Framework for Acronym Extraction (Xiusheng Huang)
ANACONDA: Adversarial training with iNtrust loss in ACrONym DisambiguAtion (Fei Xia)
An Ensemble Approach to Acronym Extraction using Transformers (Prashant Sharma)
T5 Encoder Based Acronym Disambiguation with Weak Supervision (Gwangho Song)