Thank you for agreeing to serve as a reviewer for our workshop, I Can't Believe It's Not Better (ICBINB): Challenges in Applied Deep Learning, at ICLR 2025! Your role is essential to ensuring a high-quality and impactful program, and we appreciate the time and effort you put into providing thorough and constructive feedback.
As a reviewer, we ask that you:
Submit timely and substantive reviews.
Adhere to the reviewing guidelines outlined below.
Follow the ICLR Code of Ethics and Code of Conduct when assessing submissions.
The key steps and deadlines in the review process are as follows:
Create/Update OpenReview Profile – Ensure your profile is up to date with your most recent publications. If you do not yet have an OpenReview profile, please register for one using the email address at which you received your invitation.
Conduct a Thorough Review – Provide clear, constructive, and detailed feedback, evaluating submissions on their clarity, problem statement, and relevance to the workshop theme. Please note that we allow two types of submissions: long papers (up to 4 pages) and tiny papers (up to 2 pages). Reviewers are not required to read the appendix, but may consult it if they find it helpful for evaluating the submission.
Provide a Final Recommendation – Submit your final review and overall assessment. If you expect that you will not be able to finish your review by the deadline, please contact us at cant.believe.it.is.not.better+workshop@gmail.com. (Reviewing Deadline: February 28, 2025 AoE.)
Nominate Papers for Awards – Indicate whether a paper should be considered for the Entropic Award (most surprising negative result) or the Didactic Award (most well-explained and pedagogical paper).
1. Read the Paper Carefully:
Understand the use case and the proposed scenarios of the deep learning application.
Identify the negative outcome and assess how well the paper investigates it.
2. Assess the Paper Based on These Criteria:
Scientific Rigor and Transparency: Are the methods and conclusions well-supported by evidence?
Novelty and Significance: Does the paper offer new insights into negative results?
Clarity of Writing: Is the paper well-structured and easy to understand?
Alignment with Workshop Topics: Does the paper present a reasonable argument that aligns with the theme of the workshop (Challenges in Applied Deep Learning)?
3. Write a Constructive Review:
Your review should include:
A short summary of the paper’s contributions and key findings.
Strengths and weaknesses of the submission.
Your recommendation, with justification according to the above criteria.
Optional feedback to help authors improve their work.
Nomination for Awards: If applicable, indicate whether the paper is a strong candidate for the Entropic Award or the Didactic Award.
AI-Generated Papers
As part of a small experiment that we believe aligns with the theme of our workshop (and with approval from the central ICLR workshop chairs), we have included three AI-generated papers among the 43 total submissions. As a result, it is possible, though unlikely, that you will be assigned an AI-generated paper to review. If you prefer not to review AI-generated papers, please let us know by February 11 AoE by emailing us at cant.believe.it.is.not.better+workshop@gmail.com, and we will review your assignments and reassign papers accordingly.
(Adapted from the ICLR 2025 Reviewer Guidelines)
All ICLR participants, including reviewers, are required to read and adhere to the ICLR Code of Ethics (https://iclr.cc/public/CodeOfEthics). The Code of Ethics applies to all conference participation, including paper submission, reviewing, and paper discussion.
As part of the review process, reviewers are asked to flag potential violations of the ICLR Code of Ethics. Note that authors are encouraged to discuss questions and potential issues regarding the Code of Ethics as part of their submission; this discussion does not count against the paper's page limit and should be included as a separate section.
(Adapted from the ICLR 2025 Reviewer Guidelines)
The use of LLMs is allowed as a general-purpose assistive tool. Authors and reviewers should understand that they take full responsibility for the content written under their name, including LLM-generated content that could be construed as plagiarism or scientific misconduct (e.g., fabrication of facts). LLMs are not eligible for authorship.
For additional guidance on reviewing, please refer to the ICLR 2025 Reviewer Guidelines here.
We appreciate your contributions to making this workshop a valuable and insightful event, and we thank you for your dedication and expertise! Please feel free to reach out at cant.believe.it.is.not.better+workshop@gmail.com if you have any questions.