I am a PhD student at the Computer Science faculty at the Technion, advised by Yonatan Belinkov. My research focuses on hallucinations, safety, and interpretability in LLMs, with the broader goal of improving their truthfulness and safety. I received the Council for Higher Education (VATAT) Scholarship for PhD students in data science and artificial intelligence.
If you find my work interesting, feel free to reach out!
This September, I’ll be visiting Fazl Barez's group at the University of Oxford, followed in early October by a visit to Shay Cohen's group at the University of Edinburgh. If you’re nearby and would like to connect, I’d be happy to meet.
Trust Me, I'm Wrong: High-Certainty Hallucinations in LLMs
Adi Simhi, Itay Itzhak, Fazl Barez, Gabriel Stanovsky, Yonatan Belinkov
arXiv 2025
Distinguishing Ignorance from Error in LLM Hallucinations
Adi Simhi, Jonathan Herzig, Idan Szpektor, Yonatan Belinkov
arXiv 2024
Constructing Benchmarks and Interventions for Combating Hallucinations in LLMs
Adi Simhi, Jonathan Herzig, Idan Szpektor, Yonatan Belinkov
arXiv 2024