🎓 We are excited to announce that we’ve started a Multimodal Research Paper Reading Group! It's open to everyone.
Our goal is to explore state-of-the-art research in multimodal learning, fusion strategies, and representation modeling, integrating insights from vision, language, audio, and beyond.
💬 Join our discussions here: https://lnkd.in/eairvusS
📚 Recent reading topics include:
📘 “Foundations & Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions.”
📄 “Trainee Action Recognition through Interaction Analysis in CCATT Mixed-Reality Training”, a multimodal analytics fusion framework that leverages Cognitive Task Analysis to develop training assessment systems.
📘 "Segment Anything"
📄 "Qwen2.5-VL"
📄 "YOLO-World"
🗓️ Reading Schedule: https://vanderbilt365-my.sharepoint.com/:x:/g/personal/divya_mereddy_vanderbilt_edu/EYR3NtZnvMRKiL3kmiJRNhkBm73N4LGXbJehYKYsXLA5Pg?e=F2ErdP