Thank you for agreeing to serve as a reviewer for our ICLR 2026 workshop, I Can't Believe It's Not Better (ICBINB): Where Large Language Models Need to Improve! Your role is essential in ensuring a high-quality and impactful program, and we appreciate your time and effort in providing thorough and constructive feedback.
As a reviewer, we ask that you:
Submit timely and substantive reviews.
Adhere to the reviewing guidelines outlined below.
Follow the ICLR Code of Ethics and Code of Conduct when assessing submissions.
The key steps and deadlines in the review process are as follows:
Create/Update OpenReview Profile – Ensure your profile is up to date with your most recent publications. If you do not yet have an OpenReview profile, please register for one using the email address at which you received the invitation.
Conduct a Thorough Review – Provide clear, constructive, and detailed feedback by evaluating submissions based on their clarity, problem statement, and relevance to the workshop theme. Please note that we allow two types of submissions: long papers (up to 4 pages) and tiny papers (up to 2 pages). Reviewers are not required to read the appendix, but they may consult it if they find it helpful for evaluating the submission.
Provide a Final Recommendation – Submit your final review and overall assessment. If you expect to be unable to finish your reviews before the deadline, please contact us at cant.believe.it.is.not.better+workshop@gmail.com. (Reviewing Deadline: February 25, 2026 AoE.)
Nominate Papers for Awards – Indicate whether a paper should be considered for the Entropic Award (most surprising negative result) or the Didactic Award (most well-explained and pedagogical paper).
1. Read the Paper Carefully:
Understand the research question, setting, and assumptions for the investigation of the LLM’s limitations or failures.
Identify the negative outcome and assess how clearly and thoroughly the paper investigates and characterizes this limitation or failure mode.
2. Assess the Paper Based on These Criteria:
Scientific Rigor and Transparency: Are the methods and conclusions well-supported by evidence?
Novelty and Significance: Does the paper offer new insights into negative results?
Clarity of Writing: Is the paper well-structured and easy to understand?
Alignment with Workshop Topics: Does the paper present a reasonable argument that aligns with the theme of the workshop (Where Large Language Models Need to Improve)?
3. Write a Constructive Review and Submit it before the Deadline (February 25th, 2026):
Your review should include:
A short summary of the paper’s contributions and key findings.
Strengths and weaknesses of the submission.
Your recommendation with justification according to the above criteria.
Optional feedback to help authors improve their work.
Nomination for Awards: If applicable, indicate whether the paper is a strong candidate for the Entropic Award or the Didactic Award.
(Adapted from the ICLR 2026 Reviewer Guidelines)
All ICLR participants, including reviewers, are required to adhere to the ICLR Code of Ethics (https://iclr.cc/public/CodeOfEthics). All reviewers are required to read the Code of Ethics and adhere to it. The Code of Ethics applies to all conference participation, including paper submission, reviewing, and paper discussion.
As part of the review process, reviewers are asked to raise potential violations of the ICLR Code of Ethics. Note that authors are encouraged to discuss questions and potential issues regarding the Code of Ethics as part of their submission. This discussion is not counted against the maximum page limit of the paper and should be included as a separate section.
(Adapted from the ICLR 2026 Reviewer Guidelines)
The use of LLMs is allowed as a general-purpose writing assistance tool. However, reviewers should understand that they take full responsibility for the contents written under their name, including content generated by LLMs that could be construed as plagiarism, scientific misconduct, or low quality (e.g., fabrication of facts). Reviews that exhibit such issues may be flagged as low quality, thus putting the reviewers’ papers at risk for desk rejection (see above).
For additional guidance on reviewing, please refer to the ICLR 2026 Reviewer Guidelines here.
We appreciate your contributions to making this workshop a valuable and insightful event, and we thank you for your dedication and expertise in supporting the review process. Please feel free to reach out at cant.believe.it.is.not.better+workshop@gmail.com if you have any questions.