2025-09-16
Suteera Seeha
📍Lehrstuhl für Medizinische Informatik, TU München
Named Entity Recognition (NER) in the medical domain presents significant challenges due to the complexity and specificity of medical terminology, especially in lower-resource settings where annotated data is scarce. We have explored the performance of a representative large language model (LLaMA 3.1) and a BERT-based model (GBERT) for NER on German medical texts.
What methodology did we use to compare LLaMA 3.1 and GBERT?
What is the impact of different training data sizes on performance when simulating lower-resource conditions?
How did the two models compare in our evaluation?
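For readers less familiar with the BERT-style side of the comparison, the sketch below shows how NER is typically framed as token classification with an encoder such as GBERT. It is a minimal illustration only, assuming the Hugging Face transformers library, the public deepset/gbert-base checkpoint, and a made-up three-label scheme; it is not the setup used in the work presented here, and the untrained classification head will produce arbitrary labels until fine-tuned on annotated data.

```python
# Minimal token-classification sketch (illustrative only; not the authors' setup).
# Assumptions: Hugging Face transformers, the public "deepset/gbert-base"
# checkpoint, and a hypothetical three-label BIO scheme.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-MED", "I-MED"]  # hypothetical label set for illustration

tokenizer = AutoTokenizer.from_pretrained("deepset/gbert-base")
model = AutoModelForTokenClassification.from_pretrained(
    "deepset/gbert-base", num_labels=len(labels)
)

sentence = "Der Patient erhält 40 mg Pantoprazol täglich."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, num_labels)

# Note: the classification head is randomly initialized here, so these
# predictions are meaningless until the model is fine-tuned on labeled data.
predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predictions):
    print(f"{token}\t{labels[pred]}")
```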