Improving performance of LLMs on Science and Math Problems
LLMs often perform poorly when answering science and math problems.
One reason is that science and mathematics problems are not purely textual: they use special symbols to denote abstract objects, and solving them requires multi-step reasoning rather than pattern matching over text. On this subpage, we explore some of the research on improving LLMs at the task of solving science and math problems.
Relevant Research
Measuring Mathematical Problem Solving with the MATH Dataset https://arxiv.org/abs/2103.03874
Large Language Models are Zero-Shot Reasoners https://arxiv.org/abs/2205.11916
Solving Quantitative Reasoning Problems with Language Models https://arxiv.org/abs/2206.14858
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models https://arxiv.org/abs/2201.11903
Self-Consistency Improves Chain of Thought Reasoning in Language Models https://arxiv.org/abs/2203.11171
Large Language Models for Mathematical Reasoning: Progresses and Challenges https://arxiv.org/abs/2402.00157
Tree of Thoughts (ToT): Deliberate Problem Solving with Large Language Models https://arxiv.org/abs/2305.10601
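Two of the ideas above are simple enough to sketch in a few lines: zero-shot chain-of-thought prompting (appending a reasoning trigger such as "Let's think step by step.") and self-consistency (sampling several reasoning paths and majority-voting over the final answers). The sketch below is illustrative only: the `samples` list stands in for real sampled LLM completions, and the answer extractor assumes completions end with "The answer is X."

```python
from collections import Counter

ZERO_SHOT_COT_TRIGGER = "Let's think step by step."

def build_prompt(question: str) -> str:
    # Zero-shot chain-of-thought: append a reasoning trigger to the question.
    return f"Q: {question}\nA: {ZERO_SHOT_COT_TRIGGER}"

def extract_answer(completion: str) -> str:
    # Toy extractor: assumes each completion ends with "The answer is X."
    return completion.rsplit("The answer is", 1)[-1].strip(" .")

def self_consistency_vote(completions: list[str]) -> str:
    # Self-consistency: take a majority vote over the final answers
    # extracted from independently sampled reasoning paths.
    answers = [extract_answer(c) for c in completions]
    return Counter(answers).most_common(1)[0][0]

# Hypothetical sampled completions, standing in for real LLM API calls.
samples = [
    "There are 3 boxes with 4 pens each, so 3 * 4 = 12. The answer is 12.",
    "4 pens per box times 3 boxes gives 12. The answer is 12.",
    "3 + 4 = 7. The answer is 7.",  # one faulty reasoning path
]

print(build_prompt("A shop has 3 boxes of 4 pens. How many pens are there?"))
print(self_consistency_vote(samples))  # majority answer: "12"
```

The key design point of self-consistency is that individual sampled reasoning chains can be wrong, but aggregating over many chains tends to recover the correct answer, which is why it improves on greedy single-path chain-of-thought decoding.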